Get trained models Generally available

GET /_cat/ml/trained_models/{model_id}

All methods and paths for this operation:

GET /_cat/ml/trained_models

GET /_cat/ml/trained_models/{model_id}

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • model_id string Required

    A unique identifier for the trained model.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if the job is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if the job is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
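
For example, the column, sort, and unit parameters can be combined; a hypothetical request that displays only a few columns, sorted by creation time in descending order:

GET _cat/ml/trained_models?v=true&h=id,heap_size,operations,create_time&s=create_time:desc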

Responses

  • 200 application/json
    • id string
    • created_by string

      Information about the creator of the model.

    • heap_size number | string

      The estimated heap size to keep the trained model in memory.

    • operations string

      The estimated number of operations to use the model. This number helps to measure the computational complexity of the model.

    • license string

      The license level of the model.

    • create_time string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • version string
    • description string

      A description of the model.

    • ingest.pipelines string

      The number of pipelines that are referencing the model.

    • ingest.count string

      The total number of documents that are processed by the model.

    • ingest.time string

      The total time spent processing documents with the model.

    • ingest.current string

      The total number of documents that are currently being handled by the model.

    • ingest.failed string

      The total number of failed ingest attempts with the model.

    • data_frame.id string

      The identifier for the data frame analytics job that created the model. Only displayed if the job is still available.

    • data_frame.create_time string

      The time the data frame analytics job was created.

    • data_frame.source_index string

      The source index used to train in the data frame analysis.

    • data_frame.analysis string

      The analysis used by the data frame to build the model.

    • type string Generally available
GET _cat/ml/trained_models?v=true&format=json
resp = client.cat.ml_trained_models(
    v=True,
    format="json",
)
const response = await client.cat.mlTrainedModels({
  v: "true",
  format: "json",
});
response = client.cat.ml_trained_models(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlTrainedModels([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/trained_models?v=true&format=json"
client.cat().mlTrainedModels();
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]

Get cluster info Generally available

GET /_info/{target}

Returns basic information about the cluster.

Path parameters

  • target string | array[string]

    Limits the information returned to the specific target. Supports a comma-separated list, such as http,ingest.

    Values are _all, http, ingest, thread_pool, or script.
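
For example, a hypothetical request that limits the information to the HTTP and ingest targets:

GET /_info/http,ingest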

Responses

  • 200 application/json
    • cluster_name string Required
    • http object
      • current_open number

        Current number of open HTTP connections for the node.

      • total_opened number

        Total number of HTTP connections opened for the node.

      • clients array[object]

        Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

        • id number

          Unique ID for the HTTP client.

        • agent string

          Reported agent for the HTTP client. If unavailable, this property is not included in the response.

        • local_address string

          Local address for the HTTP connection.

        • remote_address string

          Remote address for the HTTP connection.

        • last_uri string

          The URI of the client’s most recent request.

        • opened_time_millis number

          Time at which the client opened the connection.

        • closed_time_millis number

          Time at which the client closed the connection if the connection is closed.

        • last_request_time_millis number

          Time of the most recent request from this client.

        • request_count number

          Number of requests from this client.

        • request_size_bytes number

          Cumulative size in bytes of all requests from this client.

        • x_opaque_id string

          Value from the client’s x-opaque-id HTTP header. If unavailable, this property is not included in the response.

    • ingest object
      • pipelines object

        Contains statistics about ingest pipelines for the node.

        • * object Additional properties
          • count number Required

            Total number of documents ingested during the lifetime of this node.

          • current number Required

            Total number of documents currently being ingested.

          • failed number Required

            Total number of failed ingest operations during the lifetime of this node.

          • processors array[object] Required

            Total number of ingest processors.

            • * object Additional properties
          • time_in_millis number

            Total time, in milliseconds, spent preprocessing documents in the ingest pipeline.

          • ingested_as_first_pipeline_in_bytes number Required Generally available

            Total number of bytes of all documents ingested by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors.

          • produced_as_first_pipeline_in_bytes number Required Generally available

            Total number of bytes of all documents produced by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors. In situations where there are subsequent pipelines, the value represents the size of the document after all pipelines have run.

      • total object
        • count number Required

          Total number of documents ingested during the lifetime of this node.

        • current number Required

          Total number of documents currently being ingested.

        • failed number Required

          Total number of failed ingest operations during the lifetime of this node.

        • time_in_millis number

          Total time, in milliseconds, spent preprocessing ingest documents during the lifetime of this node.

    • thread_pool object
      • * object Additional properties
        • active number

          Number of active threads in the thread pool.

        • completed number

          Number of tasks completed by the thread pool executor.

        • largest number

          Highest number of active threads in the thread pool.

        • queue number

          Number of tasks in queue for the thread pool.

        • rejected number

          Number of tasks rejected by the thread pool executor.

        • threads number

          Number of threads in the thread pool.

    • script object
      • cache_evictions number

        Total number of times the script cache has evicted old data.

      • compilations number

        Total number of inline script compilations performed by the node.

      • compilations_history object

        Contains the recent history of script compilations.

        • * number Additional properties
      • compilation_limit_triggered number

        Total number of times the script compilation circuit breaker has limited inline script compilations.

      • contexts array[object]
        • context string
        • compilations number
        • cache_evictions number
        • compilation_limit_triggered number
GET /_info/_all
resp = client.cluster.info(
    target="_all",
)
const response = await client.cluster.info({
  target: "_all",
});
response = client.cluster.info(
  target: "_all"
)
$resp = $client->cluster()->info([
    "target" => "_all",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_info/_all"
client.cluster().info(i -> i
    .target("_all")
);

Create a connector sync job Beta

POST /_connector/_sync_job

Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.

application/json

Body Required

  • id string Required

    The id of the associated connector.

  • job_type string

    Values are full, incremental, or access_control.

  • trigger_method string

    Values are on_demand or scheduled.

Responses

  • 200 application/json
    • id string Required
POST _connector/_sync_job
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
resp = client.connector.sync_job_post(
    id="connector-id",
    job_type="full",
    trigger_method="on_demand",
)
const response = await client.connector.syncJobPost({
  id: "connector-id",
  job_type: "full",
  trigger_method: "on_demand",
});
response = client.connector.sync_job_post(
  body: {
    "id": "connector-id",
    "job_type": "full",
    "trigger_method": "on_demand"
  }
)
$resp = $client->connector()->syncJobPost([
    "body" => [
        "id" => "connector-id",
        "job_type" => "full",
        "trigger_method" => "on_demand",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"id":"connector-id","job_type":"full","trigger_method":"on_demand"}' "$ELASTICSEARCH_URL/_connector/_sync_job"
client.connector().syncJobPost(s -> s
    .id("connector-id")
    .jobType(SyncJobType.Full)
    .triggerMethod(SyncJobTriggerMethod.OnDemand)
);
Request example
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
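
Response examples (200)
A hypothetical response; the returned `id` identifies the newly created sync job document:
{
  "id": "my-connector-sync-job-id"
}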

Update the connector draft filtering validation Technical preview

PUT /_connector/{connector_id}/_filtering/_validation

Update the draft filtering validation info for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated.

application/json

Body Required

  • validation object Required
    • errors array[object] Required
      • ids array[string] Required
      • messages array[string] Required
    • state string Required

      Values are edited, invalid, or valid.

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_validation' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'
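
Request example
A hypothetical request body that marks the draft filtering as valid, with no validation errors:
{
  "validation": {
    "errors": [],
    "state": "valid"
  }
}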

Get the status for a data stream lifecycle Generally available

GET /{index}/_lifecycle/explain

Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.

Path parameters

  • index string | array[string] Required

    The name of the index to explain.

Query parameters

  • include_defaults boolean

    Indicates if the API should return the default values the system uses for the index's lifecycle.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.
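
For example, a hypothetical request that also returns the default lifecycle values the system uses for the index:

GET .ds-metrics-2023.03.22-000001/_lifecycle/explain?include_defaults=true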

Responses

  • 200 application/json
    • indices object Required
      • * object Additional properties
        • index string Required
        • managed_by_lifecycle boolean Required
        • index_creation_date_millis number

          Time unit for milliseconds

        • time_since_index_creation string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • rollover_date_millis number

          Time unit for milliseconds

        • time_since_rollover string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • lifecycle object

          The data stream lifecycle configuration for the index. If include_defaults is requested, it also displays the default rollover conditions.

          • data_retention string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • downsampling object
            • rounds array[object] Required

              The list of downsampling rounds to execute as part of this downsampling configuration

          • enabled boolean

            If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

            Default value is true.

          • rollover object
            • min_age string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • max_age string
            • min_docs number
            • max_docs number
            • min_size
            • max_size
            • min_primary_shard_size
            • max_primary_shard_size
            • min_primary_shard_docs number
            • max_primary_shard_docs number
        • generation_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • error string
GET .ds-metrics-2023.03.22-000001/_lifecycle/explain
resp = client.indices.explain_data_lifecycle(
    index=".ds-metrics-2023.03.22-000001",
)
const response = await client.indices.explainDataLifecycle({
  index: ".ds-metrics-2023.03.22-000001",
});
response = client.indices.explain_data_lifecycle(
  index: ".ds-metrics-2023.03.22-000001"
)
$resp = $client->indices()->explainDataLifecycle([
    "index" => ".ds-metrics-2023.03.22-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-metrics-2023.03.22-000001/_lifecycle/explain"
client.indices().explainDataLifecycle(e -> e
    .index(".ds-metrics-2023.03.22-000001")
);
Response examples (200)
A successful response from `GET .ds-metrics-2023.03.22-000001/_lifecycle/explain`, which retrieves the lifecycle status for a data stream backing index. If the index is managed by a data stream lifecycle, the API will show the `managed_by_lifecycle` field set to `true` and the rest of the response will contain information about the lifecycle execution status for this index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "rollover_date_millis" : 1679475564293,
      "time_since_rollover" : "121ms",
      "lifecycle" : { },
      "generation_time" : "121ms"
    }
  }
}
The API reports any errors related to the lifecycle execution for the target index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "lifecycle" : {
        "enabled": true
      },
      "error": "{\"type\":\"validation_exception\",\"reason\":\"Validation Failed: 1: this action would add [2] shards, but this cluster
currently has [4]/[3] maximum normal shards open;\"}"
    }
  }
}

Delete a document Generally available

DELETE /{index}/_doc/{id}

Remove a JSON document from the specified index.

NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document.

Optimistic concurrency control

Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
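
For example, a hypothetical conditional delete that succeeds only if the document was last changed at sequence number 5 with primary term 1:

DELETE /my-index-000001/_doc/1?if_seq_no=5&if_primary_term=1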

Versioning

Each document indexed is versioned. When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime. Every write operation run on a document, deletes included, causes its version to be incremented. The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations. The length of time for which a deleted document's version remains available is determined by the index.gc_deletes index setting.
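
For example, the retention window for deleted document versions can be lengthened with the index.gc_deletes setting; a hypothetical request that extends it to two minutes:

PUT /my-index-000001/_settings
{
  "index.gc_deletes": "120s"
}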

Routing

If routing is used during indexing, the routing value also needs to be specified to delete a document.

If the _routing mapping is set to required and no routing value is specified, the delete API throws a RoutingMissingException and rejects the request.

For example:

DELETE /my-index-000001/_doc/1?routing=shard-1

This request deletes the document with ID 1, but it is routed based on the shard-1 routing value. The document is not deleted if the correct routing is not specified.

Distributed

The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.

Required authorization

  • Index privileges: delete

Path parameters

  • index string Required

    The name of the target index.

  • id string Required

    A unique identifier for the document.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the delete operation might not be available when the delete operation runs. Some reasons for this might be that the primary shard is currently recovering from a store or undergoing relocation. By default, the delete operation will wait on the primary shard to become available for up to 1 minute before failing and responding with an error.

    Values are -1 or 0.

  • version number

    An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The minimum number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

Responses

  • 200 application/json
    • _id string Required
    • _index string Required
    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number
    • _shards object Required
      • failed number Required
      • successful number Required
      • total number Required
      • failures array[object]
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • type string Required

            The type of error

          • reason string | null

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • root_cause array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • suppressed array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number Required
        • status string
      • skipped number
    • _version number Required
    • forced_refresh boolean
DELETE /my-index-000001/_doc/1
resp = client.delete(
    index="my-index-000001",
    id="1",
)
const response = await client.delete({
  index: "my-index-000001",
  id: 1,
});
response = client.delete(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->delete([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1"
client.delete(d -> d
    .id("1")
    .index("my-index-000001")
);
Response examples (200)
A successful response from `DELETE /my-index-000001/_doc/1`, which deletes the JSON document 1 from the `my-index-000001` index.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 2,
  "_primary_term": 1,
  "_seq_no": 5,
  "result": "deleted"
}

Create an enrich policy Generally available

PUT /_enrich/policy/{name}

Creates an enrich policy.

Path parameters

  • name string Required

    Name of the enrich policy to create or update.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body Required

  • geo_match object Additional properties
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    • name string
    • elasticsearch_version string
  • match object Additional properties
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    • name string
    • elasticsearch_version string
  • range object Additional properties
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    • name string
    • elasticsearch_version string

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_enrich/policy/postal_policy
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
resp = client.enrich.put_policy(
    name="postal_policy",
    geo_match={
        "indices": "postal_codes",
        "match_field": "location",
        "enrich_fields": [
            "location",
            "postal_code"
        ]
    },
)
const response = await client.enrich.putPolicy({
  name: "postal_policy",
  geo_match: {
    indices: "postal_codes",
    match_field: "location",
    enrich_fields: ["location", "postal_code"],
  },
});
response = client.enrich.put_policy(
  name: "postal_policy",
  body: {
    "geo_match": {
      "indices": "postal_codes",
      "match_field": "location",
      "enrich_fields": [
        "location",
        "postal_code"
      ]
    }
  }
)
$resp = $client->enrich()->putPolicy([
    "name" => "postal_policy",
    "body" => [
        "geo_match" => [
            "indices" => "postal_codes",
            "match_field" => "location",
            "enrich_fields" => array(
                "location",
                "postal_code",
            ),
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"geo_match":{"indices":"postal_codes","match_field":"location","enrich_fields":["location","postal_code"]}}' "$ELASTICSEARCH_URL/_enrich/policy/postal_policy"
client.enrich().putPolicy(p -> p
    .geoMatch(g -> g
        .enrichFields(List.of("location","postal_code"))
        .indices("postal_codes")
        .matchField("location")
    )
    .name("postal_policy")
);
Request example
An example body for a `PUT /_enrich/policy/postal_policy` request.
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
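A match policy is defined in the same way; a hypothetical request that enriches documents by exact-matching an email field:
PUT /_enrich/policy/users-policy
{
  "match": {
    "indices": "users",
    "match_field": "email",
    "enrich_fields": [ "first_name", "last_name" ]
  }
}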

Perform sparse embedding inference on the service Generally available

POST /_inference/sparse_embedding/{inference_id}

Path parameters

  • inference_id string Required

    The inference ID.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference request to complete.

    Values are -1 or 0.

application/json

Body

  • input string | array[string] Required

    The text on which you want to perform the inference task.

Responses

  • 200 application/json
    • sparse_embedding array[object] Required
      • embedding object Required

        Sparse Embedding tokens are represented as a dictionary of string to double.

        • * number Additional properties
POST _inference/sparse_embedding/my-elser-model
{
  "input": "The sky above the port was the color of television tuned to a dead channel."
}
resp = client.inference.sparse_embedding(
    inference_id="my-elser-model",
    input="The sky above the port was the color of television tuned to a dead channel.",
)
const response = await client.inference.sparseEmbedding({
  inference_id: "my-elser-model",
  input:
    "The sky above the port was the color of television tuned to a dead channel.",
});
response = client.inference.sparse_embedding(
  inference_id: "my-elser-model",
  body: {
    "input": "The sky above the port was the color of television tuned to a dead channel."
  }
)
$resp = $client->inference()->sparseEmbedding([
    "inference_id" => "my-elser-model",
    "body" => [
        "input" => "The sky above the port was the color of television tuned to a dead channel.",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":"The sky above the port was the color of television tuned to a dead channel."}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().sparseEmbedding(s -> s
    .inferenceId("my-elser-model")
    .input("The sky above the port was the color of television tuned to a dead channel.")
);
Request example
Run `POST _inference/sparse_embedding/my-elser-model` to perform sparse embedding on the example sentence.
{
  "input": "The sky above the port was the color of television tuned to a dead channel."
}
Response examples (200)
An abbreviated response from `POST _inference/sparse_embedding/my-elser-model`.
{
  "sparse_embedding": [
    {
      "embedding": {
        "port": 2.1259406,
        "sky": 1.7073475,
        "color": 1.6922266,
        "dead": 1.6247464,
        "television": 1.3525393,
        "above": 1.2425821,
        "tuned": 1.1440028,
        "colors": 1.1218185,
        "tv": 1.0111054,
        "ports": 1.0067928,
        "poem": 1.0042328,
        "channel": 0.99471164,
        "tune": 0.96235967,
        "scene": 0.9020516
      }
    }
  ]
}

Create or update a Logstash pipeline Generally available

PUT /_logstash/pipeline/{id}

Create a pipeline that is used for Logstash Central Management. If the specified pipeline exists, it is replaced.

Required authorization

  • Cluster privileges: manage_logstash_pipelines

Path parameters

  • id string Required

    An identifier for the pipeline.

application/json

Body Required

  • description string Required

    A description of the pipeline. This description is not used by Elasticsearch or Logstash.

  • last_modified string | number Required

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • pipeline string Required

    The configuration for the pipeline.

  • pipeline_metadata object Required
    • type string Required
    • version string Required
  • pipeline_settings object Required
    • pipeline.workers number Required

      The number of workers that will, in parallel, execute the filter and output stages of the pipeline.

    • pipeline.batch.size number Required

      The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs.

    • pipeline.batch.delay number Required

      When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers.

    • queue.type string Required

      The internal queuing model to use for event buffering.

    • queue.max_bytes string Required

      The total capacity of the queue (queue.type: persisted) in number of bytes.

    • queue.checkpoint.writes number Required

      The maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted).

  • username string Required

    The user who last updated the pipeline.

Responses

  • 200 application/json
PUT _logstash/pipeline/my_pipeline
{
  "description": "Sample pipeline for illustration purposes",
  "last_modified": "2021-01-02T02:50:51.250Z",
  "pipeline_metadata": {
    "type": "logstash_pipeline",
    "version": 1
  },
  "username": "elastic",
  "pipeline": "input {}\\n filter { grok {} }\\n output {}",
  "pipeline_settings": {
    "pipeline.workers": 1,
    "pipeline.batch.size": 125,
    "pipeline.batch.delay": 50,
    "queue.type": "memory",
    "queue.max_bytes": "1gb",
    "queue.checkpoint.writes": 1024
  }
}
resp = client.logstash.put_pipeline(
    id="my_pipeline",
    pipeline={
        "description": "Sample pipeline for illustration purposes",
        "last_modified": "2021-01-02T02:50:51.250Z",
        "pipeline_metadata": {
            "type": "logstash_pipeline",
            "version": 1
        },
        "username": "elastic",
        "pipeline": "input {}\\n filter { grok {} }\\n output {}",
        "pipeline_settings": {
            "pipeline.workers": 1,
            "pipeline.batch.size": 125,
            "pipeline.batch.delay": 50,
            "queue.type": "memory",
            "queue.max_bytes": "1gb",
            "queue.checkpoint.writes": 1024
        }
    },
)
const response = await client.logstash.putPipeline({
  id: "my_pipeline",
  pipeline: {
    description: "Sample pipeline for illustration purposes",
    last_modified: "2021-01-02T02:50:51.250Z",
    pipeline_metadata: {
      type: "logstash_pipeline",
      version: 1,
    },
    username: "elastic",
    pipeline: "input {}\\n filter { grok {} }\\n output {}",
    pipeline_settings: {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024,
    },
  },
});
response = client.logstash.put_pipeline(
  id: "my_pipeline",
  body: {
    "description": "Sample pipeline for illustration purposes",
    "last_modified": "2021-01-02T02:50:51.250Z",
    "pipeline_metadata": {
      "type": "logstash_pipeline",
      "version": 1
    },
    "username": "elastic",
    "pipeline": "input {}\\n filter { grok {} }\\n output {}",
    "pipeline_settings": {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024
    }
  }
)
$resp = $client->logstash()->putPipeline([
    "id" => "my_pipeline",
    "body" => [
        "description" => "Sample pipeline for illustration purposes",
        "last_modified" => "2021-01-02T02:50:51.250Z",
        "pipeline_metadata" => [
            "type" => "logstash_pipeline",
            "version" => 1,
        ],
        "username" => "elastic",
        "pipeline" => "input {}\\n filter { grok {} }\\n output {}",
        "pipeline_settings" => [
            "pipeline.workers" => 1,
            "pipeline.batch.size" => 125,
            "pipeline.batch.delay" => 50,
            "queue.type" => "memory",
            "queue.max_bytes" => "1gb",
            "queue.checkpoint.writes" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"description":"Sample pipeline for illustration purposes","last_modified":"2021-01-02T02:50:51.250Z","pipeline_metadata":{"type":"logstash_pipeline","version":1},"username":"elastic","pipeline":"input {}\n filter { grok {} }\n output {}","pipeline_settings":{"pipeline.workers":1,"pipeline.batch.size":125,"pipeline.batch.delay":50,"queue.type":"memory","queue.max_bytes":"1gb","queue.checkpoint.writes":1024}}' "$ELASTICSEARCH_URL/_logstash/pipeline/my_pipeline"
client.logstash().putPipeline(p -> p
    .id("my_pipeline")
    .pipeline(pi -> pi
        .description("Sample pipeline for illustration purposes")
        .lastModified(DateTime.of("2021-01-02T02:50:51.250Z"))
        .pipeline("input {}\n filter { grok {} }\n output {}")
        .pipelineMetadata(pip -> pip
            .type("logstash_pipeline")
            .version("1")
        )
        .pipelineSettings(pip -> pip
            .pipelineWorkers(1)
            .pipelineBatchSize(125)
            .pipelineBatchDelay(50)
            .queueType("memory")
            .queueMaxBytes("1gb")
            .queueCheckpointWrites(1024)
        )
        .username("elastic")
    )
);
Request example
Run `PUT _logstash/pipeline/my_pipeline` to create a pipeline.
{
  "description": "Sample pipeline for illustration purposes",
  "last_modified": "2021-01-02T02:50:51.250Z",
  "pipeline_metadata": {
    "type": "logstash_pipeline",
    "version": 1
  },
  "username": "elastic",
  "pipeline": "input {}\\n filter { grok {} }\\n output {}",
  "pipeline_settings": {
    "pipeline.workers": 1,
    "pipeline.batch.size": 125,
    "pipeline.batch.delay": 50,
    "queue.type": "memory",
    "queue.max_bytes": "1gb",
    "queue.checkpoint.writes": 1024
  }
}

Machine learning anomaly detection

Get calendar configuration info Generally available

POST /_ml/calendars/{calendar_id}

All methods and paths for this operation:

GET /_ml/calendars

POST /_ml/calendars
GET /_ml/calendars/{calendar_id}
POST /_ml/calendars/{calendar_id}

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar. You can get information for multiple calendars by using a comma-separated list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier.

Query parameters

  • from number

    Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.

  • size number

    Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.

application/json

Body

  • page object
    • from number

      Skips the specified number of items.

      Default value is 0.

    • size number

      Specifies the maximum number of items to obtain.

      Default value is 10000.

Responses

  • 200 application/json
    • calendars array[object] Required
      • calendar_id string Required
      • description string

        A description of the calendar.

      • job_ids array[string] Required

        An array of anomaly detection job identifiers.

    • count number Required
GET _ml/calendars/planned-outages
resp = client.ml.get_calendars(
    calendar_id="planned-outages",
)
const response = await client.ml.getCalendars({
  calendar_id: "planned-outages",
});
response = client.ml.get_calendars(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->getCalendars([
    "calendar_id" => "planned-outages",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().getCalendars(g -> g
    .calendarId("planned-outages")
);
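
Response examples (200)
A hypothetical response for the `planned-outages` calendar, assuming no anomaly detection jobs are assigned to it:
{
  "count": 1,
  "calendars": [
    {
      "calendar_id": "planned-outages",
      "job_ids": []
    }
  ]
}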

Delete an anomaly detection job Generally available

DELETE /_ml/anomaly_detectors/{job_id}

All job configuration, model state and results are deleted. It is not currently possible to delete multiple jobs using wildcards or a comma-separated list. If you delete a job that has a datafeed, the request first tries to delete the datafeed. This behavior is equivalent to calling the delete datafeed API with the same timeout and force parameters as the delete job request.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • force boolean

    Use to forcefully delete an opened job; this method is quicker than closing and deleting the job.

  • delete_user_annotations boolean

    Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset.

  • wait_for_completion boolean

    Specifies whether the request should return immediately or wait until the job deletion completes.
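
For example, a hypothetical request that forcefully deletes an opened job and returns a task identifier instead of waiting for the deletion to complete:

DELETE _ml/anomaly_detectors/total-requests?force=true&wait_for_completion=false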

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE _ml/anomaly_detectors/total-requests
resp = client.ml.delete_job(
    job_id="total-requests",
)
const response = await client.ml.deleteJob({
  job_id: "total-requests",
});
response = client.ml.delete_job(
  job_id: "total-requests"
)
$resp = $client->ml()->deleteJob([
    "job_id" => "total-requests",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/total-requests"
client.ml().deleteJob(d -> d
    .jobId("total-requests")
);
Response examples (200)
A successful response when deleting an anomaly detection job.
{
  "acknowledged": true
}
A successful response when deleting an anomaly detection job asynchronously. When the `wait_for_completion` query parameter is set to `false`, the response contains an identifier for the job deletion task.
{
  "task": "oTUltX4IQMOUUVeiohTt8A:39"
}

Preview a datafeed Generally available

POST /_ml/datafeeds/{datafeed_id}/_preview

All methods and paths for this operation:

GET /_ml/datafeeds/_preview

POST /_ml/datafeeds/_preview
GET /_ml/datafeeds/{datafeed_id}/_preview
POST /_ml/datafeeds/{datafeed_id}/_preview

This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine.

IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.
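
For example, a hypothetical preview of an existing datafeed (the datafeed identifier is illustrative):

GET _ml/datafeeds/datafeed-high_sum_total_sales/_preview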

Required authorization

  • Index privileges: read
  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. NOTE: If you use this path parameter, you cannot provide datafeed or anomaly detection job configuration details in the request body.

Query parameters

  • start string | number

    The start time from which the datafeed preview should begin.

  • end string | number

    The end time at which the datafeed preview should stop.

application/json

Body

  • datafeed_config object
    • aggregations object

      If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

    • chunking_config object
      • mode string Required

        Values are auto, manual, or off.

      • time_span string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • datafeed_id string
    • delayed_data_check_config object
      • check_window string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • frequency string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices string | array[string]
    • indices_options object

      Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

      • allow_no_indices boolean

        If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]
      • ignore_unavailable boolean

        If true, missing or closed indices are not included in the response.

        Default value is false.

      • ignore_throttled boolean

        If true, concrete, expanded or aliased indices are ignored when frozen.

        Default value is true.

    • job_id string
    • max_empty_searches number

      If a real-time datafeed has never seen any data (including during any initial training period) then it will automatically stop itself and close its associated job after this many real-time searches that return no documents. In other words, it will stop after frequency times max_empty_searches of real-time operation. If not set then a datafeed with no end time that sees no data will remain started until it is explicitly stopped.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    • query_delay string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • runtime_mappings object
      • * object Additional properties
        • fields object

          For type composite

          • * object Additional properties
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_index string
        • script object
          • source string | object
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            • * object Additional properties
          • lang string

            Values are painless, expression, mustache, or java.

          • options object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • script_fields object

      Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

      • * object Additional properties
        • script object Required
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            • * string Additional properties
        • ignore_failure boolean
    • scroll_size number

      The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

      Default value is 1000.

  • job_config object
    • allow_lazy_open boolean

      Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.

      Default value is false.

    • analysis_config object Required
      • bucket_span string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • categorization_analyzer string | object

        One of:
      • categorization_field_name string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • categorization_filters array[string]

        If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.

      • detectors array[object] Required

        Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.

        • by_field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • custom_rules array[object]

          Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules. A minimal rule appears in the request sketch after this parameter list.

          • actions array[string]

            The set of actions to be triggered when the rule applies. If more than one action is specified the effects of all actions are combined.

            Supported values include:

            • skip_result: The result will not be created. Unless you also specify skip_model_update, the model will be updated as usual with the corresponding series value.
            • skip_model_update: The value for that series will not be used to update the model. Unless you also specify skip_result, the results will be created as usual. This action is suitable when certain values are expected to be consistently anomalous and they affect the model in a way that negatively impacts the rest of the results.

            Values are skip_result or skip_model_update. Default value is ["skip_result"].

          • conditions array[object]

            An array of numeric conditions when the rule applies. A rule must either have a non-empty scope or at least one condition. Multiple conditions are combined together with a logical AND.

          • scope object

            A scope of series where the rule applies. A rule must either have a non-empty scope or at least one condition. By default, the scope includes all series. Scoping is allowed for any of the fields that are also specified in by_field_name, over_field_name, or partition_field_name.

        • detector_description string

          A description of the detector.

        • detector_index number

          A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.

        • exclude_frequent string

          Values are all, none, by, or over.

        • field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • function string

          The analysis function that is used. For example, count, rare, mean, min, max, or sum.

        • over_field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • partition_field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • use_null boolean

          Defines whether a new series is used as the null series when there is no value for the by or partition fields.

          Default value is false.

      • influencers array[string]

        A comma-separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

      • latency string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • model_prune_window string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • multivariate_by_fields boolean

        This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.

      • per_partition_categorization object
        • enabled boolean

          To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.

        • stop_on_warn boolean

          This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stops for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.

      • summary_count_field_name string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • analysis_limits object
      • categorization_examples_limit number

        The maximum number of examples stored per category in memory and in the results data store. If you increase this value, more examples are available, however it requires that you have more storage available. If you set this value to 0, no examples are stored. NOTE: The categorization_examples_limit applies only to analysis that uses categorization.

        Default value is 4.

      • model_memory_limit number | string

    • background_persist_interval string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • custom_settings object

      Custom metadata about the job.

    • daily_model_snapshot_retention_after_days number

      Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job.

      Default value is 1.

    • data_description object Required
      • format string

        Only JSON format is supported at this time.

      • time_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • time_format string

        The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

        Default value is epoch.

      • field_delimiter string
    • datafeed_config object
      • aggregations object

        If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

      • chunking_config object
        • mode string Required

          Values are auto, manual, or off.

        • time_span string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • datafeed_id string
      • delayed_data_check_config object
        • check_window string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • enabled boolean Required

          Specifies whether the datafeed periodically checks for delayed data.

      • frequency string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • indices string | array[string]
      • indices_options object

        Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

        • allow_no_indices boolean

          If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

        • expand_wildcards string | array[string]
        • ignore_unavailable boolean

          If true, missing or closed indices are not included in the response.

          Default value is false.

        • ignore_throttled boolean

          If true, concrete, expanded or aliased indices are ignored when frozen.

          Default value is true.

      • job_id string
      • max_empty_searches number

        If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops itself and closes its associated job after this many real-time searches that return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped.

      • query object

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

        External documentation
      • query_delay string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • runtime_mappings object
        • * object Additional properties
          • fields object

            For type composite

            • * object Additional properties
              • type string Required

                Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          • fetch_fields array[object]

            For type lookup

            • field string Required

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • format string
          • format string

            A custom format for date type runtime fields.

          • input_field string

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • target_field string

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • target_index string
          • script object
            • id string
            • params object

              Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • * object Additional properties
            • lang string

              Any of:

              Values are painless, expression, mustache, or java.

            • options object
              • * string Additional properties
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • script_fields object

        Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

        • * object Additional properties
          • script object Required
            • id string
            • params object

              Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • * object Additional properties
            • lang string

              Any of:

              Values are painless, expression, mustache, or java.

            • options object
              • * string Additional properties
          • ignore_failure boolean
      • scroll_size number

        The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

        Default value is 1000.

    • description string

      A description of the job.

    • groups array[string]

      A list of job groups. A job can belong to no groups or many.

    • job_id string
    • job_type string

      Reserved for future use, currently set to anomaly_detector.

    • model_plot_config object
      • annotations_enabled boolean Generally available

        If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

        Default value is true.

      • enabled boolean

        If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

        Default value is false.

      • terms string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • model_snapshot_retention_days number

      Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. The default value is 10, which means snapshots ten days older than the newest snapshot are deleted.

      Default value is 10.

    • renormalization_window_days number

      Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.

    • results_index_name string
    • results_retention_days number

      Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.
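
To tie the request-body options above together, here is a minimal sketch of previewing an ad-hoc datafeed by supplying datafeed_config and job_config inline, assuming the 8.x Python client (whose preview_datafeed accepts both as keyword arguments). The index, field names, and detector settings are illustrative assumptions, not values required by the API.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

# Preview an ad-hoc datafeed: both configs are passed inline, so nothing
# is created in the cluster.
resp = client.ml.preview_datafeed(
    datafeed_config={
        "indices": ["kibana_sample_data_logs"],  # hypothetical index
        "query": {"match_all": {}},
        "runtime_mappings": {
            "kilobytes": {  # runtime field computed by a Painless script
                "type": "double",
                "script": {"source": "emit(doc['bytes'].value / 1024.0)"},
            }
        },
        "scroll_size": 1000,
    },
    job_config={
        "analysis_config": {
            "bucket_span": "15m",
            "detectors": [
                {
                    "function": "sum",
                    "field_name": "kilobytes",
                    # A custom rule with one condition: skip results when the
                    # actual value is below 1.0 (a rule needs a non-empty
                    # scope or at least one condition).
                    "custom_rules": [
                        {
                            "actions": ["skip_result"],
                            "conditions": [
                                {"applies_to": "actual", "operator": "lt", "value": 1.0}
                            ],
                        }
                    ],
                }
            ],
            "influencers": ["clientip"],
        },
        "data_description": {"time_field": "timestamp"},
    },
)
print(resp)  # an array of sample documents as the datafeed would supply them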

Responses

  • 200 application/json
POST /_ml/datafeeds/{datafeed_id}/_preview

Console:

GET _ml/datafeeds/datafeed-high_sum_total_sales/_preview

Python:

resp = client.ml.preview_datafeed(
    datafeed_id="datafeed-high_sum_total_sales",
)

JavaScript:

const response = await client.ml.previewDatafeed({
  datafeed_id: "datafeed-high_sum_total_sales",
});

Ruby:

response = client.ml.preview_datafeed(
  datafeed_id: "datafeed-high_sum_total_sales"
)

PHP:

$resp = $client->ml()->previewDatafeed([
    "datafeed_id" => "datafeed-high_sum_total_sales",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales/_preview"

Java:

client.ml().previewDatafeed(p -> p
    .datafeedId("datafeed-high_sum_total_sales")
);
Get trained models usage info Generally available

GET /_ml/trained_models/{model_id}/_stats

All methods and paths for this operation:

GET /_ml/trained_models/_stats

GET /_ml/trained_models/{model_id}/_stats

You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • model_id string | array[string] Required

    The unique identifier of the trained model or a model alias. It can be a comma-separated list or a wildcard expression.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no models that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. If false, it returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of models.

  • size number

    Specifies the maximum number of models to obtain. Together with from, it pages through the matched models, as in the sketch below.
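
Because from and size page through the matched models, collecting statistics for every model is a short loop. A minimal sketch, assuming the 8.x Python client, which exposes from as from_ (the page size of 100 is an arbitrary choice):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

stats, offset, page = [], 0, 100
while True:
    resp = client.ml.get_trained_models_stats(
        model_id="*", allow_no_match=True, from_=offset, size=page
    )
    stats.extend(resp["trained_model_stats"])  # documented response field
    offset += page
    if offset >= resp["count"]:  # count is the total number of matches
        break
print(f"collected statistics for {len(stats)} models")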

Responses

  • 200 application/json
    • count number Required

      The total number of trained model statistics that matched the requested ID patterns. It can be higher than the number of items in the trained_model_stats array because the size of the array is restricted by the supplied size parameter.

    • trained_model_stats array[object] Required

      An array of trained model statistics, which are sorted by the model_id value in ascending order.

      • deployment_stats object
        • adaptive_allocations object
          • enabled boolean Required

            If true, adaptive_allocations is enabled.

          • min_number_of_allocations number

            Specifies the minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

          • max_number_of_allocations number

            Specifies the maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

        • allocation_status object
          • allocation_count number Required

            The current number of nodes where the model is allocated.

          • state string Required

            Values are started, starting, or fully_allocated. A polling sketch appears at the end of this section.

          • target_allocation_count number Required

            The desired number of nodes for model allocation.

        • cache_size number | string

        • deployment_id string Required
        • error_count number

          The sum of error_count for all nodes in the deployment.

        • inference_count number

          The sum of inference_count for all nodes in the deployment.

        • model_id string Required
        • nodes array[object] Required

          The deployment stats for each node that currently has the model allocated. In serverless, stats are reported for a single unnamed virtual node.

          • average_inference_time_ms
          • average_inference_time_ms_last_minute
          • average_inference_time_ms_excluding_cache_hits
          • error_count number

            The number of errors when evaluating the trained model.

          • inference_count number

            The total number of inference calls made against this node for this model.

          • inference_cache_hit_count number
          • inference_cache_hit_count_last_minute number
          • last_access
          • number_of_allocations number

            The number of allocations assigned to this node.

          • number_of_pending_requests number

            The number of inference requests queued to be processed.

          • peak_throughput_per_minute number Required
          • rejected_execution_count number

            The number of inference requests that were not processed because the queue was full.

          • routing_state object Required
          • start_time
          • threads_per_allocation number

            The number of threads used by each allocation during inference.

          • throughput_last_minute number Required
          • timeout_count number

            The number of inference requests that timed out before being processed.

        • number_of_allocations number

          The number of allocations requested.

        • peak_throughput_per_minute number Required
        • priority string Required

          Values are normal or low.

        • queue_capacity number

          The number of inference requests that can be queued before new requests are rejected.

        • rejected_execution_count number

          The sum of rejected_execution_count for all nodes in the deployment. Individual nodes reject an inference request if the inference queue is full. The queue size is controlled by the queue_capacity setting in the start trained model deployment API.

        • reason string

          The reason for the current deployment state. Usually only populated when the model is not deployed to a node.

        • start_time number

          The deployment start time, in milliseconds since the epoch.

        • state string

          Values are started, starting, stopping, or failed.

        • threads_per_allocation number

          The number of threads used by each allocation during inference.

        • timeout_count number

          The sum of timeout_count for all nodes in the deployment.

      • inference_stats object
        • cache_miss_count number Required

          The number of times the model was loaded for inference and was not retrieved from the cache. If this number is close to the inference_count, the cache is not being appropriately used. This can be solved by increasing the cache size or its time-to-live (TTL). Refer to general machine learning settings for the appropriate settings. A quick check of this ratio is sketched after this response listing.

        • failure_count number Required

          The number of failures when using the model for inference.

        • inference_count number Required

          The total number of times the model has been called for inference. This is across all inference contexts, including all pipelines.

        • missing_all_fields_count number Required

          The number of inference calls where all the training features for the model were missing.

        • timestamp number

          The time when the statistics were last updated, in milliseconds since the epoch.

      • ingest object

        A collection of ingest stats for the model across all nodes. The values are summations of the individual node statistics. The format matches the ingest section in the nodes stats API.

        • * object Additional properties
      • model_id string Required
      • model_size_stats object Required
      • pipeline_count number Required

        The number of ingest pipelines that currently refer to the model.
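
The cache_miss_count guidance above lends itself to a quick health check. A sketch that reuses the Python client from the earlier paging example; flagging a ratio near 1.0 follows the cache_miss_count description, and the wildcard model ID is an illustrative choice:

# A miss ratio near 1.0 suggests the inference cache is too small or its
# TTL too short, per the cache_miss_count description above.
resp = client.ml.get_trained_models_stats(model_id="*", allow_no_match=True)
for s in resp["trained_model_stats"]:
    inf = s.get("inference_stats")
    if not inf or inf["inference_count"] == 0:
        continue
    ratio = inf["cache_miss_count"] / inf["inference_count"]
    print(s["model_id"], f"cache miss ratio: {ratio:.2f}")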

GET /_ml/trained_models/{model_id}/_stats

Console:

GET _ml/trained_models/_stats

Python:

resp = client.ml.get_trained_models_stats()

JavaScript:

const response = await client.ml.getTrainedModelsStats();

Ruby:

response = client.ml.get_trained_models_stats

PHP:

$resp = $client->ml()->getTrainedModelsStats();

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/_stats"

Java:

client.ml().getTrainedModelsStats(g -> g);
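
Because allocation_status.state reports whether a deployment is starting, started, or fully_allocated, a caller that has just started a deployment can poll these stats until allocation completes. A sketch under the same Python-client assumption; the model ID and timing are illustrative:

import time

state = None
for _ in range(30):  # poll for up to about five minutes
    resp = client.ml.get_trained_models_stats(model_id="my-model")  # hypothetical ID
    dep = resp["trained_model_stats"][0].get("deployment_stats") or {}
    state = dep.get("allocation_status", {}).get("state")
    if state == "fully_allocated":
        break
    time.sleep(10)
print("allocation state:", state)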