Perform streaming inference
Added in 8.16.0
Get real-time responses for completion tasks: the answer is delivered incrementally as it is generated, reducing the time to first response. This API works only with the completion task type.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, and models from Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs instead.
This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.
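If the built-in roles are broader than you need, a minimal sketch of a custom role granting only this privilege, using the create-role security API (the role name inference_reader and the endpoint URL are placeholders, mirroring the example below):

curl \
--request PUT 'http://api.example.com/_security/role/inference_reader' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{
  "cluster": ["monitor_inference"]
}'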
Path parameters
- inference_id (string, Required): The unique identifier for the inference endpoint.
Body
- input (string | array[string], Required): The text on which you want to perform the inference task. It can be a single string or an array.
  NOTE: Inference endpoints for the completion task type currently only support a single string as input.
- task_settings (object): Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when the inference endpoint was created.
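The available task_settings depend on the service behind the endpoint, so the field below is an assumption for illustration only: for example, an endpoint backed by the OpenAI service accepts a user identifier (user-123 is a placeholder):

{
  "input": "What is Elastic?",
  "task_settings": {
    "user": "user-123"
  }
}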
curl \
--request POST 'http://api.example.com/_inference/completion/{inference_id}/_stream' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{
  "input": "What is Elastic?"
}'
Request example:
{
  "input": "What is Elastic?"
}
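Response example. The response is returned as a stream of server-sent events (content type text/event-stream). The shape below is an illustrative sketch for the completion task type, assuming each event carries a chunk of the generated text and the stream ends with a [DONE] marker:

event: message
data: {"completion": [{"delta": "Elastic is"}]}

event: message
data: {"completion": [{"delta": " a search company."}]}

event: message
data: [DONE]

Because events arrive incrementally, the client must read the response as a stream rather than waiting for the full body. A minimal shell sketch, reusing the placeholder endpoint above and printing each data payload as it arrives (curl -N disables output buffering):

curl -N -s \
--request POST 'http://api.example.com/_inference/completion/{inference_id}/_stream' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"input": "What is Elastic?"}' \
| while IFS= read -r line; do
    # lines beginning with "data: " carry the JSON chunk (or the [DONE] marker)
    case "$line" in
      data:*) printf '%s\n' "${line#data: }" ;;
    esac
  done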