API Reference - OpenAI API

The document provides information about getting started with the OpenAI API including authentication, making requests, endpoints, and streaming responses. It also describes audio and speech generation capabilities.

Introduction

You can interact with the API through HTTP requests from any language, via our official Python bindings, our official Node.js library, or a community-maintained library.

To install the official Python bindings, run the following command:

pip install openai

To install the official Node.js library, run the following command in your Node.js project directory:

npm install openai@^4.0.0

Authentication

The OpenAI API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server, where your API key can be securely loaded from an environment variable or key management service.

All API requests should include your API key in an Authorization HTTP header as follows:

Authorization: Bearer OPENAI_API_KEY

Organization (optional)

For users who belong to multiple organizations, you can pass a header to specify which organization is used for an API request. Usage from these API requests will count as usage for the specified organization.

Example curl command:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Organization: org-onnseWcOjHF4Hp81pNZTgGNH"

Example with the openai Python package:

from openai import OpenAI

client = OpenAI(
    organization='org-onnseWcOjHF4Hp81pNZTgGNH',
)

Example with the openai Node.js package:

import OpenAI from "openai";

const openai = new OpenAI({
    organization: 'org-onnseWcOjHF4Hp81pNZTgGNH',
});

Organization IDs can be found on your Organization settings page.
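As a sketch of the backend pattern described above (illustrative only, not part of the official SDK; `build_headers` is a hypothetical helper), the required headers might be assembled from an environment variable:

```python
import os
from typing import Optional

def build_headers(organization: Optional[str] = None) -> dict:
    # Load the secret key from the environment rather than hard-coding it.
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    if organization is not None:
        # Optional header selecting which organization the usage counts against.
        headers["OpenAI-Organization"] = organization
    return headers
```

A backend would pass these headers on every request it proxies to the API, keeping the key out of client-side code.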

Making requests
You can paste the command below into your terminal to run your first API request. Make sure to replace
$OPENAI_API_KEY with your secret API key.

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  }'

This request queries the gpt-3.5-turbo model (which under the hood points to the latest gpt-3.5-turbo
model variant) to complete the text starting with a prompt of "Say this is a test". You should get a response
back that resembles the following:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo-1106",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "\n\nThis is a test!"
      },
      "logprobs": null,
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

Now that you've generated your first chat completion, let's break down the response object. We can see the finish_reason is stop, which means the API returned the full chat completion generated by the model without running into any limits. In the choices list, we only generated a single message, but you can set the n parameter to generate multiple message choices.
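For instance, a request body asking for two choices could look like this sketch (the same payload as the curl example above, with n added):

```python
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7,
    "n": 2,  # ask for two independent completions of the same prompt
}
```

Each element of the returned choices list then carries its own index and message.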

Streaming
The OpenAI API provides the ability to stream responses back to a client in order to allow partial results for
certain requests. To achieve this, we follow the Server-sent events standard.

Our official Node and Python libraries handle Server-sent events for you. In Python, a streaming request
looks like:

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

In Node / TypeScript, a streaming request looks like:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Say this is a test" }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();

Parsing Server-sent events

Parsing Server-sent events is non-trivial and should be done with caution. Simple strategies like splitting on newlines may result in parsing errors. We recommend using existing client libraries when possible.
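To show the shape of the events the libraries handle for you, here is a minimal sketch of extracting data-only events, assuming each event has already been reassembled into one complete line (real network reads can split events mid-line, which is exactly why the official libraries are recommended); `parse_sse_lines` is a hypothetical helper:

```python
import json

def parse_sse_lines(lines):
    """Yield the JSON payload of each data-only SSE event, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separator lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # the sentinel that terminates the stream
        yield json.loads(payload)

events = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    '',
    'data: [DONE]',
]
for chunk in parse_sse_lines(events):
    print(chunk["choices"][0]["delta"]["content"])
```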

Audio
Learn how to turn audio into text or text into audio.

Related guide: Speech to text

Create speech

POST https://api.openai.com/v1/audio/speech

Generates audio from the input text.

Request body

model string Required
One of the available TTS models: tts-1 or tts-1-hd.

input string Required
The text to generate audio for. The maximum length is 4096 characters.

voice string Required
The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide.

response_format string Optional Defaults to mp3
The format of the generated audio. Supported formats are mp3, opus, aac, and flac.

speed number Optional Defaults to 1
The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default.

Returns

The audio file content.

Example request (python):

from pathlib import Path
import openai

speech_file_path = Path(__file__).parent / "speech.mp3"
response = openai.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="The quick brown fox jumped over the lazy dog."
)
response.stream_to_file(speech_file_path)

Create transcription

POST https://api.openai.com/v1/audio/transcriptions

Transcribes audio into the input language.

Request body

file file Required
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model string Required
ID of the model to use. Only whisper-1 is currently available.

language string Optional
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.

prompt string Optional
An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

response_format string Optional Defaults to json
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

temperature number Optional Defaults to 0
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

Returns

The transcribed text.

Example request (python):

from openai import OpenAI
client = OpenAI()

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=audio_file
)

Response:

{
  "text": "Imagine the wildest idea that you've ever ..."
}

Create translation

POST https://api.openai.com/v1/audio/translations

Translates audio into English.

Request body

file file Required
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model string Required
ID of the model to use. Only whisper-1 is currently available.

prompt string Optional
An optional text to guide the model's style or continue a previous audio segment. The prompt should be in English.

response_format string Optional Defaults to json
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

temperature number Optional Defaults to 0
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

Returns

The translated text.

Example request (python):

from openai import OpenAI
client = OpenAI()

audio_file = open("speech.mp3", "rb")
transcript = client.audio.translations.create(
    model="whisper-1",
    file=audio_file
)

Response:

{
  "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}

Chat
Given a list of messages comprising a conversation, the model will return a response.

Related guide: Chat Completions

Create chat completion

POST https://api.openai.com/v1/chat/completions

Creates a model response for the given chat conversation.

Example request (python):

from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)

Response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

Request body

messages array Required
A list of messages comprising the conversation so far. Example Python code.

model string Required
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

frequency_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
See more information about frequency and presence penalties.

logit_bias map Optional Defaults to null
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

logprobs boolean or null Optional Defaults to false
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model.

top_logprobs integer or null Optional
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.

max_tokens integer or null Optional


The maximum number of tokens that can be generated in the chat completion.

The total length of input tokens and generated tokens is limited by the model's context length.
Example Python code for counting tokens.

n integer or null Optional Defaults to 1


How many chat completion choices to generate for each input message. Note that you will be
charged based on the number of generated tokens across all of the choices. Keep n as 1 to
minimize costs.

presence_penalty number or null Optional Defaults to 0


Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear
in the text so far, increasing the model's likelihood to talk about new topics.

See more information about frequency and presence penalties.

response_format object Optional


An object specifying the format that the model must output. Compatible with GPT-4 Turbo and
gpt-3.5-turbo-1106 .

Setting to { "type": "json_object" } enables JSON mode, which guarantees the message
the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself
via a system or user message. Without this, the model may generate an unending stream of
whitespace until the generation reaches the token limit, resulting in a long-running and seemingly
"stuck" request. Also note that the message content may be partially cut off if
finish_reason="length" , which indicates the generation exceeded max_tokens or the
conversation exceeded the max context length.
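Following the note above, a JSON-mode request body might be sketched like this (the model and messages are illustrative):

```python
request_body = {
    "model": "gpt-3.5-turbo-1106",
    "response_format": {"type": "json_object"},  # enable JSON mode
    "messages": [
        # Per the note above, the prompt itself must also ask for JSON.
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
}
```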

seed integer or null Optional


This feature is in Beta. If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same seed and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the system_fingerprint response
parameter to monitor changes in the backend.

stop string / array / null Optional Defaults to null


Up to 4 sequences where the API will stop generating further tokens.

stream boolean or null Optional Defaults to false


If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Example Python code.

temperature number or null Optional Defaults to 1


What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output
more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

top_p number or null Optional Defaults to 1


An alternative to sampling with temperature, called nucleus sampling, where the model considers
the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.
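To make "the tokens comprising the top_p probability mass" concrete, here is a toy sketch over an invented four-token distribution (an illustration only, not how the API implements sampling):

```python
def nucleus_filter(probs: dict, top_p: float) -> dict:
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches top_p (nucleus sampling's candidate set)."""
    kept, total = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return kept

# With top_p=0.8, only "a" and "b" survive: 0.5 + 0.3 reaches the threshold.
print(nucleus_filter({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}, 0.8))
```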

tools array Optional


A list of tools the model may call. Currently, only functions are supported as a tool. Use this to
provide a list of functions the model may generate JSON inputs for.

tool_choice string or object Optional


Controls which (if any) function is called by the model. none means the model will not call a
function and instead generates a message. auto means the model can pick between
generating a message or calling a function. Specifying a particular function via {"type":
"function", "function": {"name": "my_function"}} forces the model to call that
function.

none is the default when no functions are present. auto is the default if functions are present.
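A sketch of a request body that defines one tool and forces it via tool_choice; get_weather is a hypothetical function definition, not part of the API:

```python
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {  # JSON Schema describing the function's arguments
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Forces a call to get_weather; omit for "auto", or pass "none" to disable.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```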

user string Optional


A unique identifier representing your end-user, which can help OpenAI to monitor and detect
abuse. Learn more.

function_call Deprecated string or object Optional


Deprecated in favor of tool_choice .

Controls which (if any) function is called by the model. none means the model will not call a
function and instead generates a message. auto means the model can pick between
generating a message or calling a function. Specifying a particular function via {"name":
"my_function"} forces the model to call that function.

none is the default when no functions are present. auto is the default if functions are present.

functions Deprecated array Optional


Deprecated in favor of tools .

A list of functions the model may generate JSON inputs for.



Returns

Returns a chat completion object, or a streamed sequence of chat completion chunk objects if
the request is streamed.

The chat completion object

Represents a chat completion response returned by the model, based on the provided input.

id string
A unique identifier for the chat completion.

choices array
A list of chat completion choices. Can be more than one if n is greater than 1.

created integer
The Unix timestamp (in seconds) of when the chat completion was created.

model string
The model used for the chat completion.

system_fingerprint string
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

object string
The object type, which is always chat.completion.

usage object
Usage statistics for the completion request.

Example:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

The chat completion chunk object

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

id string
A unique identifier for the chat completion. Each chunk has the same ID.

choices array
A list of chat completion choices. Can be more than one if n is greater than 1.

created integer
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

model string
The model used to generate the completion.

system_fingerprint string
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

object string
The object type, which is always chat.completion.chunk.

Example (each line is one data-only event; the payloads are truncated in the source):

{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
....
{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
{"id":"chatcmpl-123","object":"chat.completion.chunk", ...}
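Building on the chunk fields above, a client can reassemble the full message by concatenating the content deltas; this is a minimal sketch over plain dicts shaped like the chunk object (`accumulate` is a hypothetical helper):

```python
def accumulate(chunks):
    """Concatenate the content deltas of a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("content") is not None:
            parts.append(delta["content"])
    return "".join(parts)

chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},  # first chunk carries the role
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": " there"}}]},
    {"choices": [{"delta": {}}]},  # final chunk has an empty delta
]
print(accumulate(chunks))
```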

Embeddings
Get a vector representation of a given input that can be easily consumed by machine learning models and
algorithms.

Related guide: Embeddings

Create embeddings

POST https://api.openai.com/v1/embeddings

Creates an embedding vector representing the input text.

Request body

input string or array Required
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. Example Python code for counting tokens.

model string Required
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

encoding_format string Optional Defaults to float
The format to return the embeddings in. Can be either float or base64.

dimensions integer Optional
The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.

user string Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Returns

A list of embedding objects.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.embeddings.create(
    model="text-embedding-ada-002",
    input="The food was delicious and the waiter...",
    encoding_format="float"
)

Response:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        .... (1536 floats total for ada-002)
        -0.0028842222,
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada-002",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}

The embedding object

Represents an embedding vector returned by the embeddings endpoint.

index integer
The index of the embedding in the list of embeddings.

embedding array
The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide.

object string
The object type, which is always "embedding".

Example:

{
  "object": "embedding",
  "embedding": [
    0.0023064255,
    -0.009327292,
    .... (1536 floats total for ada-002)
    -0.0028842222,
  ],
  "index": 0
}
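As a sketch of how embedding vectors are "easily consumed" downstream: cosine similarity between two embeddings, using only the standard library:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors; near 1.0 means
    the underlying texts are semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

In practice a and b would each be the embedding list from a create-embeddings response.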

Fine-tuning
Manage fine-tuning jobs to tailor a model to your specific training data.

Related guide: Fine-tune models


Create fine-tuning job

POST https://api.openai.com/v1/fine_tuning/jobs

Creates a fine-tuning job which begins the process of creating a new model from a given dataset.

The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete.

Learn more about fine-tuning

Request body

model string Required
The name of the model to fine-tune. You can select one of the supported models.

training_file string Required
The ID of an uploaded file that contains training data.
See upload file for how to upload a file.
Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.
See the fine-tuning guide for more details.

hyperparameters object Optional
The hyperparameters used for the fine-tuning job.

suffix string or null Optional Defaults to null
A string of up to 18 characters that will be added to your fine-tuned model name.
For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel.

validation_file string or null Optional
The ID of an uploaded file that contains validation data.
If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files.
Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.
See the fine-tuning guide for more details.

Returns

A fine-tuning.job object.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-3.5-turbo"
)

Response:

{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "gpt-3.5-turbo-0613",
  "created_at": 1614807352,
  "fine_tuned_model": null,
  "organization_id": "org-123",
  "result_files": [],
  "status": "queued",
  "validation_file": null,
  "training_file": "file-abc123"
}
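Because the training data must be a JSONL file uploaded with purpose fine-tune, preparing a minimal chat-format file can be sketched with the standard library (the example conversation is invented):

```python
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]},
]

# Each training example becomes one JSON object per line.
with open("training.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded via the Files API before creating the job.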

List fine-tuning jobs

GET https://api.openai.com/v1/fine_tuning/jobs

List your organization's fine-tuning jobs.

Query parameters

after string Optional
Identifier for the last job from the previous pagination request.

limit integer Optional Defaults to 20
Number of fine-tuning jobs to retrieve.

Returns

A list of paginated fine-tuning job objects.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.list()

Response:

{
  "object": "list",
  "data": [
    {
      "object": "fine_tuning.job.event",
      "id": "ft-event-TjX0lMfOniCZX64t9PUQT5hn",
      "created_at": 1689813489,
      "level": "warn",
      "message": "Fine tuning process stopping due to ...",
      "data": null,
      "type": "message"
    },
    { ... },
    { ... }
  ],
  "has_more": true
}

List fine-tuning events

GET https://api.openai.com/v1/fine_tuning/jobs/{fine_tuning_job_id}/events

Get status updates for a fine-tuning job.

Path parameters

fine_tuning_job_id string Required
The ID of the fine-tuning job to get events for.

Query parameters

after string Optional
Identifier for the last event from the previous pagination request.

limit integer Optional Defaults to 20
Number of events to retrieve.

Returns

A list of fine-tuning event objects.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.list_events(
    fine_tuning_job_id="ftjob-abc123",
    limit=2
)

Response:

{
  "object": "list",
  "data": [
    {
      "object": "fine_tuning.job.event",
      "id": "ft-event-ddTJfwuMVpfLXseO0Am0Gqjm",
      "created_at": 1692407401,
      "level": "info",
      "message": "Fine tuning job successfully completed",
      "data": null,
      "type": "message"
    },
    {
      "object": "fine_tuning.job.event",
      "id": "ft-event-tyiGuB72evQncpH87xe505Sv",
      "created_at": 1692407400,
      "level": "info",
      "message": "New fine-tuned model created: ft:g...",
      "data": null,
      "type": "message"
    }
  ],
  "has_more": true
}

Retrieve fine-tuning job

GET https://api.openai.com/v1/fine_tuning/jobs/{fine_tuning_job_id}

Get info about a fine-tuning job.

Learn more about fine-tuning

Path parameters

fine_tuning_job_id string Required
The ID of the fine-tuning job.

Returns

The fine-tuning object with the given ID.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.retrieve("ftjob-abc123")

Response:

{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "davinci-002",
  "created_at": 1692661014,
  "finished_at": 1692661190,
  "fine_tuned_model": "ft:davinci-002:my-org:custom_...",
  "organization_id": "org-123",
  "result_files": [
    "file-abc123"
  ],
  "status": "succeeded",
  "validation_file": null,
  "training_file": "file-abc123",
  "hyperparameters": {
    "n_epochs": 4
  },
  "trained_tokens": 5768
}

Cancel fine-tuning

POST https://api.openai.com/v1/fine_tuning/jobs/{fine_tuning_job_id}/cancel

Immediately cancel a fine-tune job.

Path parameters

fine_tuning_job_id string Required
The ID of the fine-tuning job to cancel.

Returns

The cancelled fine-tuning object.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.cancel("ftjob-abc123")

Response:

{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "gpt-3.5-turbo-0613",
  "created_at": 1689376978,
  "fine_tuned_model": null,
  "organization_id": "org-123",
  "result_files": [],
  "hyperparameters": {
    "n_epochs": "auto"
  },
  "status": "cancelled",
  "validation_file": "file-abc123",
  "training_file": "file-abc123"
}

The fine-tuning job object

The fine_tuning.job object represents a fine-tuning job that has been created through the API.

id string
The object identifier, which can be referenced in the API endpoints.

created_at integer
The Unix timestamp (in seconds) for when the fine-tuning job was created.

error object or null
For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.

fine_tuned_model string or null
The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.

finished_at integer or null
The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.

hyperparameters object
The hyperparameters used for the fine-tuning job. See the fine-tuning guide for more details.

model string
The base model that is being fine-tuned.

object string
The object type, which is always "fine_tuning.job".

organization_id string
The organization that owns the fine-tuning job.

result_files array
The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.

status string
The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.

trained_tokens integer or null
The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.

training_file string
The file ID used for training. You can retrieve the training data with the Files API.

validation_file string or null
The file ID used for validation. You can retrieve the validation results with the Files API.

Example:

{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "davinci-002",
  "created_at": 1692661014,
  "finished_at": 1692661190,
  "fine_tuned_model": "ft:davinci-002:my-org:custom_...",
  "organization_id": "org-123",
  "result_files": [
    "file-abc123"
  ],
  "status": "succeeded",
  "validation_file": null,
  "training_file": "file-abc123",
  "hyperparameters": {
    "n_epochs": 4
  },
  "trained_tokens": 5768
}
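The status values above imply a simple client-side polling pattern; here is a sketch with the retrieve call injected so the loop works independently of the SDK (`wait_for_job` is a hypothetical helper, not part of the API):

```python
# Terminal statuses, taken from the documented status values above.
TERMINAL = {"succeeded", "failed", "cancelled"}

def wait_for_job(retrieve, job_id, sleep=lambda s: None):
    """Poll retrieve(job_id) until the job reaches a terminal status."""
    while True:
        job = retrieve(job_id)
        if job["status"] in TERMINAL:
            return job
        sleep(10)  # back off between polls

# With the real SDK, retrieve might wrap client.fine_tuning.jobs.retrieve
# and sleep might be time.sleep.
```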

The fine-tuning job event object

id string

created_at integer

level string

message string

object string

Example:

{
  "object": "fine_tuning.job.event",
  "id": "ftevent-abc123",
  "created_at": 1677610602,
  "level": "info",
  "message": "Created fine-tuning job"
}

Files
Files are used to upload documents that can be used with features like Assistants and Fine-tuning.

Upload file

POST https://api.openai.com/v1/files

Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.

The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.

Please contact us if you need to increase these storage limits.

Request body

file file Required
The File object (not file name) to be uploaded.

purpose string Required
The intended purpose of the uploaded file.
Use "fine-tune" for Fine-tuning and "assistants" for Assistants and Messages. This allows us to validate that the format of the uploaded file is correct for fine-tuning.

Returns

The uploaded File object.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.files.create(
    file=open("mydata.jsonl", "rb"),
    purpose="fine-tune"
)

Response:

{
  "id": "file-abc123",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "mydata.jsonl",
  "purpose": "fine-tune"
}

List files

GET https://api.openai.com/v1/files

Returns a list of files that belong to the user's organization.

Query parameters

purpose string Optional
Only return files with the given purpose.

Returns

A list of File objects.

Example request (python):

from openai import OpenAI
client = OpenAI()

client.files.list()

Response:

{
  "data": [
    {
      "id": "file-abc123",
      "object": "file",
      "bytes": 175,
      "created_at": 1613677385,
      "filename": "salesOverview.pdf",
      "purpose": "assistants"
    },
    {
      "id": "file-abc123",
      "object": "file",
      "bytes": 140,
      "created_at": 1613779121,
      "filename": "puppy.jsonl",
      "purpose": "fine-tune"
    }
  ],
  "object": "list"
}

Retrieve file
GET https://api.openai.com/v1/files/{file_id}

Returns information about a specific file.

Path parameters

file_id string Required
The ID of the file to use for this request.

Returns

The File object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.files.retrieve("file-abc123")

Response:

{
  "id": "file-abc123",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "mydata.jsonl",
  "purpose": "fine-tune"
}

Delete file
DELETE https://api.openai.com/v1/files/{file_id}

Delete a file.

Path parameters

file_id string Required
The ID of the file to use for this request.

Returns

Deletion status.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.files.delete("file-abc123")

Response:

{
  "id": "file-abc123",
  "object": "file",
  "deleted": true
}

Retrieve file content
GET https://api.openai.com/v1/files/{file_id}/content

Returns the contents of the specified file.

Path parameters

file_id string Required
The ID of the file to use for this request.

Returns

The file content.

Example request (Python):

from openai import OpenAI
client = OpenAI()

content = client.files.retrieve_content("file-abc123")

The file object

The File object represents a document that has been uploaded to OpenAI.

id string
The file identifier, which can be referenced in the API endpoints.

bytes integer
The size of the file, in bytes.

created_at integer
The Unix timestamp (in seconds) for when the file was created.

filename string
The name of the file.

object string
The object type, which is always file.

purpose string
The intended purpose of the file. Supported values are fine-tune, fine-tune-results, assistants, and assistants_output.

status string Deprecated
Deprecated. The current status of the file, which can be either uploaded, processed, or error.

status_details string Deprecated
Deprecated. For details on why a fine-tuning training file failed validation, see the error field on fine_tuning.job.

Example object:

{
  "id": "file-abc123",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "salesOverview.pdf",
  "purpose": "assistants"
}

Images
Given a prompt and/or an input image, the model will generate a new image.

Related guide: Image generation

Create image
POST https://api.openai.com/v1/images/generations

Creates an image given a prompt.

Request body

prompt string Required
A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.

model string Optional Defaults to dall-e-2
The model to use for image generation.

n integer or null Optional Defaults to 1
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.

quality string Optional Defaults to standard
The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3.

response_format string or null Optional Defaults to url
The format in which the generated images are returned. Must be one of url or b64_json.

size string or null Optional Defaults to 1024x1024
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.

style string or null Optional Defaults to vivid
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.

user string Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Returns

Returns a list of image objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.images.generate(
  model="dall-e-3",
  prompt="A cute baby sea otter",
  n=1,
  size="1024x1024"
)

Response:

{
  "created": 1589478378,
  "data": [
    {
      "url": "https://..."
    },
    {
      "url": "https://..."
    }
  ]
}
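Because the allowed size values differ between dall-e-2 and dall-e-3, a request builder may want to validate them client-side before calling the endpoint. A sketch of that check (the helper name is an assumption; the size table mirrors the constraints documented above):

```python
# Allowed image sizes per model, as documented for the images endpoint.
ALLOWED_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
}

def check_size(model: str, size: str) -> None:
    """Raise ValueError if `size` is not valid for `model`."""
    allowed = ALLOWED_SIZES.get(model)
    if allowed is None:
        raise ValueError(f"unknown model: {model}")
    if size not in allowed:
        raise ValueError(
            f"{size} is not supported by {model}; choose one of {sorted(allowed)}"
        )
```

Calling this before `client.images.generate(...)` turns a server-side 400 into an immediate, descriptive local error.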


Create image edit
POST https://api.openai.com/v1/images/edits

Creates an edited or extended image given an original image and a prompt.

Request body

image file Required
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

prompt string Required
A text description of the desired image(s). The maximum length is 1000 characters.

mask file Optional
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.

model string Optional Defaults to dall-e-2
The model to use for image generation. Only dall-e-2 is supported at this time.

n integer or null Optional Defaults to 1
The number of images to generate. Must be between 1 and 10.

size string or null Optional Defaults to 1024x1024
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

response_format string or null Optional Defaults to url
The format in which the generated images are returned. Must be one of url or b64_json.

user string Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Returns

Returns a list of image objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.images.edit(
  image=open("otter.png", "rb"),
  mask=open("mask.png", "rb"),
  prompt="A cute baby sea otter wearing a beret",
  n=2,
  size="1024x1024"
)

Response:

{
  "created": 1589478378,
  "data": [
    {
      "url": "https://..."
    },
    {
      "url": "https://..."
    }
  ]
}

Create image variation
POST https://api.openai.com/v1/images/variations

Creates a variation of a given image.

Request body

image file Required
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.

model string Optional Defaults to dall-e-2
The model to use for image generation. Only dall-e-2 is supported at this time.

n integer or null Optional Defaults to 1
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.

response_format string or null Optional Defaults to url
The format in which the generated images are returned. Must be one of url or b64_json.

size string or null Optional Defaults to 1024x1024
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

user string Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Returns

Returns a list of image objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

response = client.images.create_variation(
  image=open("image_edit_original.png", "rb"),
  n=2,
  size="1024x1024"
)

Response:

{
  "created": 1589478378,
  "data": [
    {
      "url": "https://..."
    },
    {
      "url": "https://..."
    }
  ]
}

The image object

Represents the url or the content of an image generated by the OpenAI API.

b64_json string
The base64-encoded JSON of the generated image, if response_format is b64_json.

url string
The URL of the generated image, if response_format is url (default).

revised_prompt string
The prompt that was used to generate the image, if there was any revision to the prompt.

Example object:

{
  "url": "...",
  "revised_prompt": "..."
}
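When response_format is b64_json, the image arrives as a base64-encoded string rather than a URL, so it must be decoded before it can be written to disk. A minimal sketch (the helper name and output path are assumptions):

```python
import base64

def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a b64_json payload and write it to path; returns bytes written."""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)
```

In practice `b64_data` would come from `response.data[0].b64_json` after a generation request made with `response_format="b64_json"`.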

Models
List and describe the various models available in the API. You can refer to the Models documentation to
understand what models are available and the differences between them.

List models
GET https://api.openai.com/v1/models

Lists the currently available models, and provides basic information about each one, such as the owner and availability.

Returns

A list of model objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.models.list()

Response:

{
  "object": "list",
  "data": [
    {
      "id": "model-id-0",
      "object": "model",
      "created": 1686935002,
      "owned_by": "organization-owner"
    },
    {
      "id": "model-id-1",
      "object": "model",
      "created": 1686935002,
      "owned_by": "organization-owner"
    },
    {
      "id": "model-id-2",
      "object": "model",
      "created": 1686935002,
      "owned_by": "openai"
    }
  ]
}

Retrieve model
GET https://api.openai.com/v1/models/{model}

Retrieves a model instance, providing basic information about the model such as the owner and permissioning.

Path parameters

model string Required
The ID of the model to use for this request.

Returns

The model object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.models.retrieve("gpt-3.5-turbo-instruct")

Response:

{
  "id": "gpt-3.5-turbo-instruct",
  "object": "model",
  "created": 1686935002,
  "owned_by": "openai"
}

Delete a fine-tuned model
DELETE https://api.openai.com/v1/models/{model}

Delete a fine-tuned model. You must have the Owner role in your organization to delete a model.

Path parameters

model string Required
The model to delete.

Returns

Deletion status.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.models.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")

Response:

{
  "id": "ft:gpt-3.5-turbo:acemeco:suffix:abc123",
  "object": "model",
  "deleted": true
}

The model object

Describes an OpenAI model offering that can be used with the API.

id string
The model identifier, which can be referenced in the API endpoints.

created integer
The Unix timestamp (in seconds) when the model was created.

object string
The object type, which is always "model".

owned_by string
The organization that owns the model.

Example object:

{
  "id": "davinci",
  "object": "model",
  "created": 1686935002,
  "owned_by": "openai"
}

Moderations
Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.

Related guide: Moderations

Create moderation
POST https://api.openai.com/v1/moderations

Classifies if text violates OpenAI's Content Policy.

Request body

input string or array Required
The input text to classify.

model string Optional Defaults to text-moderation-latest
Two content moderations models are available: text-moderation-stable and text-moderation-latest.

The default is text-moderation-latest, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest.

Returns

A moderation object.

Example request (Python):

from openai import OpenAI
client = OpenAI()

client.moderations.create(input="I want to kill them.")

Response:

{
  "id": "modr-XXXXX",
  "model": "text-moderation-005",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        "sexual": 1.2282071e-06,
        "hate": 0.010696256,
        "harassment": 0.29842457,
        "self-harm": 1.5236925e-08,
        "sexual/minors": 5.7246268e-08,
        "hate/threatening": 0.0060676364,
        "violence/graphic": 4.435014e-06,
        "self-harm/intent": 8.098441e-10,
        "self-harm/instructions": 2.8498655e-11,
        "harassment/threatening": 0.63055265,
        "violence": 0.99011886
      }
    }
  ]
}

The moderation object

Represents a policy compliance report by OpenAI's content moderation model against a given input.

id string
The unique identifier for the moderation request.

model string
The model used to generate the moderation results.

results array
A list of moderation objects.
Show properties

Example object:

{
  "id": "modr-XXXXX",
  "model": "text-moderation-005",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        "sexual": 1.2282071e-06,
        "hate": 0.010696256,
        "harassment": 0.29842457,
        "self-harm": 1.5236925e-08,
        "sexual/minors": 5.7246268e-08,
        "hate/threatening": 0.0060676364,
        "violence/graphic": 4.435014e-06,
        "self-harm/intent": 8.098441e-10,
        "self-harm/instructions": 2.8498655e-11,
        "harassment/threatening": 0.63055265,
        "violence": 0.99011886
      }
    }
  ]
}
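The boolean categories flags reflect OpenAI's own thresholds; an application that needs stricter or looser cutoffs can apply its own threshold to category_scores instead. A sketch using a subset of the sample result above (the 0.5 cutoff is an arbitrary assumption, not an API default):

```python
def categories_over(result: dict, threshold: float = 0.5) -> list[str]:
    """Return the category names whose score meets or exceeds threshold."""
    return sorted(
        name
        for name, score in result["category_scores"].items()
        if score >= threshold
    )

# Scores taken from the example moderation result above.
sample = {
    "category_scores": {
        "hate": 0.010696256,
        "harassment": 0.29842457,
        "harassment/threatening": 0.63055265,
        "violence": 0.99011886,
    }
}
print(categories_over(sample))  # ['harassment/threatening', 'violence']
```

Lowering the threshold widens the net: at 0.2 the same sample also flags "harassment".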

Assistants Beta

Build assistants that can call models and use tools to perform tasks.

Get started with the Assistants API

Create assistant Beta
POST https://api.openai.com/v1/assistants

Create an assistant with a model and instructions.

Request body

model Required
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name string or null Optional
The name of the assistant. The maximum length is 256 characters.

description string or null Optional
The description of the assistant. The maximum length is 512 characters.

instructions string or null Optional
The system instructions that the assistant uses. The maximum length is 32768 characters.

tools array Optional Defaults to []
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.
Show possible types

file_ids array Optional Defaults to []
A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.

metadata map Optional
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Returns

An assistant object.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_assistant = client.beta.assistants.create(
    instructions="You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
    name="Math Tutor",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4",
)
print(my_assistant)

Response:

{
  "id": "asst_abc123",
  "object": "assistant",
  "created_at": 1698984975,
  "name": "Math Tutor",
  "description": null,
  "model": "gpt-4",
  "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
  "tools": [
    {
      "type": "code_interpreter"
    }
  ],
  "file_ids": [],
  "metadata": {}
}

Create assistant file Beta
POST https://api.openai.com/v1/assistants/{assistant_id}/files

Create an assistant file by attaching a File to an assistant.

Path parameters

assistant_id string Required
The ID of the assistant for which to create a File.

Request body

file_id string Required
A File ID (with purpose="assistants") that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files.

Returns

An assistant file object.

Example request (Python):

from openai import OpenAI
client = OpenAI()

assistant_file = client.beta.assistants.files.create(
    assistant_id="asst_abc123",
    file_id="file-abc123"
)
print(assistant_file)

Response:

{
  "id": "file-abc123",
  "object": "assistant.file",
  "created_at": 1699055364,
  "assistant_id": "asst_abc123"
}

List assistants Beta
GET https://api.openai.com/v1/assistants

Returns a list of assistants.

Query parameters

limit integer Optional Defaults to 20
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

order string Optional Defaults to desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

after string Optional
A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before string Optional
A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Returns

A list of assistant objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_assistants = client.beta.assistants.list(
    order="desc",
    limit="20",
)
print(my_assistants.data)

Response:

{
  "object": "list",
  "data": [
    {
      "id": "asst_abc123",
      "object": "assistant",
      "created_at": 1698982736,
      "name": "Coding Tutor",
      "description": null,
      "model": "gpt-4",
      "instructions": "You are a helpful assistant designed to make me better at coding!",
      "tools": [],
      "file_ids": [],
      "metadata": {}
    },
    {
      "id": "asst_abc456",
      "object": "assistant",
      "created_at": 1698982718,
      "name": "My Assistant",
      "description": null,
      "model": "gpt-4",
      "instructions": "You are a helpful assistant designed to make me better at coding!",
      "tools": [],
      "file_ids": [],
      "metadata": {}
    },
    {
      "id": "asst_abc789",
      "object": "assistant",
      "created_at": 1698982643,
      "name": null,
      "description": null,
      "model": "gpt-4",
      "instructions": null,
      "tools": [],
      "file_ids": [],
      "metadata": {}
    }
  ],
  "first_id": "asst_abc123",
  "last_id": "asst_abc789",
  "has_more": false
}
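The after cursor can be fed back from each page's last_id to walk an entire collection. A generic sketch of that loop, where the page-fetching callable is a stand-in for a list method such as client.beta.assistants.list (the helper name and the callable's exact return shape are assumptions modeled on the list response above):

```python
from typing import Callable, Iterator

def iterate_pages(fetch: Callable[..., dict], limit: int = 20) -> Iterator[dict]:
    """Yield every object from a cursor-paginated list endpoint.

    fetch(limit=..., after=...) must return a dict with "data",
    "last_id", and "has_more" keys, like the list responses shown above.
    """
    after = None
    while True:
        page = fetch(limit=limit) if after is None else fetch(limit=limit, after=after)
        yield from page["data"]  # emit this page's objects
        if not page["has_more"]:
            break
        after = page["last_id"]  # cursor for the next page
```

The official Python library also offers built-in auto-pagination on list results, so a hand-rolled loop like this is mainly useful when calling the HTTP API directly.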

List assistant files Beta
GET https://api.openai.com/v1/assistants/{assistant_id}/files

Returns a list of assistant files.

Path parameters

assistant_id string Required
The ID of the assistant the file belongs to.

Query parameters

limit integer Optional Defaults to 20
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

order string Optional Defaults to desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

after string Optional
A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before string Optional
A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Returns

A list of assistant file objects.

Example request (Python):

from openai import OpenAI
client = OpenAI()

assistant_files = client.beta.assistants.files.list(
    assistant_id="asst_abc123"
)
print(assistant_files)

Response:

{
  "object": "list",
  "data": [
    {
      "id": "file-abc123",
      "object": "assistant.file",
      "created_at": 1699060412,
      "assistant_id": "asst_abc123"
    },
    {
      "id": "file-abc456",
      "object": "assistant.file",
      "created_at": 1699060412,
      "assistant_id": "asst_abc123"
    }
  ],
  "first_id": "file-abc123",
  "last_id": "file-abc456",
  "has_more": false
}

Retrieve assistant Beta
GET https://api.openai.com/v1/assistants/{assistant_id}

Retrieves an assistant.

Path parameters

assistant_id string Required
The ID of the assistant to retrieve.

Returns

The assistant object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_assistant = client.beta.assistants.retrieve("asst_abc123")
print(my_assistant)

Response:

{
  "id": "asst_abc123",
  "object": "assistant",
  "created_at": 1699009709,
  "name": "HR Helper",
  "description": null,
  "model": "gpt-4",
  "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
  "tools": [
    {
      "type": "retrieval"
    }
  ],
  "file_ids": [
    "file-abc123"
  ],
  "metadata": {}
}

Retrieve assistant file Beta
GET https://api.openai.com/v1/assistants/{assistant_id}/files/{file_id}

Retrieves an AssistantFile.

Path parameters

assistant_id string Required
The ID of the assistant that the file belongs to.

file_id string Required
The ID of the file we're getting.

Returns

The assistant file object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

assistant_file = client.beta.assistants.files.retrieve(
    assistant_id="asst_abc123",
    file_id="file-abc123"
)
print(assistant_file)

Response:

{
  "id": "file-abc123",
  "object": "assistant.file",
  "created_at": 1699055364,
  "assistant_id": "asst_abc123"
}

Modify assistant Beta
POST https://api.openai.com/v1/assistants/{assistant_id}

Modifies an assistant.

Path parameters

assistant_id string Required
The ID of the assistant to modify.

Request body

model Optional
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name string or null Optional
The name of the assistant. The maximum length is 256 characters.

description string or null Optional
The description of the assistant. The maximum length is 512 characters.

instructions string or null Optional
The system instructions that the assistant uses. The maximum length is 32768 characters.

tools array Optional Defaults to []
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.
Show possible types

file_ids array Optional Defaults to []
A list of File IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached but does not appear in the new list, it will be removed from the assistant.

metadata map Optional
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Returns

The modified assistant object.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_updated_assistant = client.beta.assistants.update(
    "asst_abc123",
    instructions="You are an HR bot, and you have access to files to answer employee questions about company policies.",
    name="HR Helper",
    tools=[{"type": "retrieval"}],
    model="gpt-4",
    file_ids=["file-abc123", "file-abc456"],
)
print(my_updated_assistant)

Response:

{
  "id": "asst_abc123",
  "object": "assistant",
  "created_at": 1699009709,
  "name": "HR Helper",
  "description": null,
  "model": "gpt-4",
  "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
  "tools": [
    {
      "type": "retrieval"
    }
  ],
  "file_ids": [
    "file-abc123",
    "file-abc456"
  ],
  "metadata": {}
}

Delete assistant Beta
DELETE https://api.openai.com/v1/assistants/{assistant_id}

Delete an assistant.

Path parameters

assistant_id string Required
The ID of the assistant to delete.

Returns

Deletion status.

Example request (Python):

from openai import OpenAI
client = OpenAI()

response = client.beta.assistants.delete("asst_abc123")
print(response)

Response:

{
  "id": "asst_abc123",
  "object": "assistant.deleted",
  "deleted": true
}

Delete assistant file Beta
DELETE https://api.openai.com/v1/assistants/{assistant_id}/files/{file_id}

Delete an assistant file.

Path parameters

assistant_id string Required
The ID of the assistant that the file belongs to.

file_id string Required
The ID of the file to delete.

Returns

Deletion status.

Example request (Python):

from openai import OpenAI
client = OpenAI()

deleted_assistant_file = client.beta.assistants.files.delete(
    assistant_id="asst_abc123",
    file_id="file-abc123"
)
print(deleted_assistant_file)

Response:

{
  "id": "file-abc123",
  "object": "assistant.file.deleted",
  "deleted": true
}

The assistant object Beta

Represents an assistant that can call the model and use tools.

id string
The identifier, which can be referenced in API endpoints.

object string
The object type, which is always assistant.

created_at integer
The Unix timestamp (in seconds) for when the assistant was created.

name string or null
The name of the assistant. The maximum length is 256 characters.

description string or null
The description of the assistant. The maximum length is 512 characters.

model string
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

instructions string or null
The system instructions that the assistant uses. The maximum length is 32768 characters.

tools array
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.
Show possible types

file_ids array
A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.

metadata map
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Example object:

{
  "id": "asst_abc123",
  "object": "assistant",
  "created_at": 1698984975,
  "name": "Math Tutor",
  "description": null,
  "model": "gpt-4",
  "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
  "tools": [
    {
      "type": "code_interpreter"
    }
  ],
  "file_ids": [],
  "metadata": {}
}
The assistant file object Beta

A list of Files attached to an assistant.

id string
The identifier, which can be referenced in API endpoints.

object string
The object type, which is always assistant.file.

created_at integer
The Unix timestamp (in seconds) for when the assistant file was created.

assistant_id string
The assistant ID that the file is attached to.

Example object:

{
  "id": "file-abc123",
  "object": "assistant.file",
  "created_at": 1699055364,
  "assistant_id": "asst_abc123"
}

Threads Beta

Create threads that assistants can interact with.

Related guide: Assistants

Create thread Beta
POST https://api.openai.com/v1/threads

Create a thread.

Request body

messages array Optional
A list of messages to start the thread with.
Show properties

metadata map Optional
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Returns

A thread object.

Example request (Python):

from openai import OpenAI
client = OpenAI()

empty_thread = client.beta.threads.create()
print(empty_thread)

Response:

{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1699012949,
  "metadata": {}
}

Retrieve thread Beta
GET https://api.openai.com/v1/threads/{thread_id}

Retrieves a thread.

Path parameters

thread_id string Required
The ID of the thread to retrieve.

Returns

The thread object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_thread = client.beta.threads.retrieve("thread_abc123")
print(my_thread)

Response:

{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1699014083,
  "metadata": {}
}

Modify thread Beta
POST https://api.openai.com/v1/threads/{thread_id}

Modifies a thread.

Path parameters

thread_id string Required
The ID of the thread to modify. Only the metadata can be modified.

Request body

metadata map Optional
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Returns

The modified thread object matching the specified ID.

Example request (Python):

from openai import OpenAI
client = OpenAI()

my_updated_thread = client.beta.threads.update(
    "thread_abc123",
    metadata={
        "modified": "true",
        "user": "abc123"
    }
)
print(my_updated_thread)

Response:

{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1699014083,
  "metadata": {
    "modified": "true",
    "user": "abc123"
  }
}

Delete thread Beta
DELETE https://api.openai.com/v1/threads/{thread_id}

Delete a thread.

Path parameters

thread_id string Required
The ID of the thread to delete.

Returns

Deletion status.

Example request (Python):

from openai import OpenAI
client = OpenAI()

response = client.beta.threads.delete("thread_abc123")
print(response)

Response:

{
  "id": "thread_abc123",
  "object": "thread.deleted",
  "deleted": true
}

The thread object Beta

Represents a thread that contains messages.

id string
The identifier, which can be referenced in API endpoints.

object string
The object type, which is always thread.

created_at integer
The Unix timestamp (in seconds) for when the thread was created.

metadata map
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

Example object:

{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1698107661,
  "metadata": {}
}
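The metadata limits (at most 16 pairs, 64-character keys, 512-character values) apply to assistants, threads, and messages alike, so it is easy to check a payload once before sending it anywhere. A minimal sketch (the helper name is an assumption):

```python
def validate_metadata(metadata: dict) -> None:
    """Raise ValueError if metadata violates the documented limits."""
    if len(metadata) > 16:
        raise ValueError("metadata may hold at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long (max 64 chars): {key!r}")
        if len(str(value)) > 512:
            raise ValueError(f"metadata value too long (max 512 chars) for key {key!r}")
```

Running this before a create or update call turns a server-side validation error into an immediate local one.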

Messages Beta

Create messages within threads

Related guide: Assistants

Create message Beta

POST https://api.openai.com/v1/threads/{thread_id}/messages Example request python Copy

Create a message. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 thread_message = client.beta.threads.messages.create(
5 "thread_abc123",
thread_id string Required 6 role="user",
The ID of the thread to create a message for. 7 content="How does AI work? Explain it in simple ter
8 )
9 print(thread_message)

Request body
Response Copy
role string Required
The role of the entity that is creating the message. Currently only user is supported.
1 {
2 "id": "msg_abc123",
content string Required 3 "object": "thread.message",
The content of the message. 4 "created_at": 1699017614,
5 "thread_id": "thread_abc123",
6 "role": "user",
file_ids array Optional Defaults to []
7 "content": [
A list of File IDs that the message should use. There can be a maximum of 10 files attached to a
8 {
message. Useful for tools like retrieval and code_interpreter that can access and use
9 "type": "text",
files.
10 "text": {
11 "value": "How does AI work? Explain it in si
metadata map Optional 12 "annotations": []
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing 13 }
additional information about the object in a structured format. Keys can be a maximum of 64 14 }
characters long and values can be a maximum of 512 characters long. 15 ],
16 "file_ids": [],
17 "assistant_id": null,
18 "run_id": null,
Returns 19 "metadata": {}
20 }
A message object.
List messages Beta

GET https://api.openai.com/v1/threads/{thread_id}/messages Example request python Copy

Returns a list of messages for a given thread. 1 from openai import OpenAI
2 client = OpenAI()
3
Path parameters 4 thread_messages = client.beta.threads.messages.list("
5 print(thread_messages.data)
thread_id string Required
The ID of the thread the messages belong to.

Response Copy

Query parameters 1 {
2 "object": "list",
3 "data": [
limit integer Optional Defaults to 20
4 {
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the
5 "id": "msg_abc123",
default is 20.
6 "object": "thread.message",
7 "created_at": 1699016383,
order string Optional Defaults to desc 8 "thread_id": "thread_abc123",
Sort order by the created_at timestamp of the objects. asc for ascending order and desc 9 "role": "user",
for descending order. 10 "content": [
11 {
12 "type": "text",
after string Optional
13 "text": {
A cursor for use in pagination. after is an object ID that defines your place in the list. For
14 "value": "How does AI work? Explain it i
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent
15 "annotations": []
call can include after=obj_foo in order to fetch the next page of the list. 16 }
17 }
before string Optional 18 ],
A cursor for use in pagination. before is an object ID that defines your place in the list. For 19 "file_ids": [],
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent 20 "assistant_id": null,
call can include before=obj_foo in order to fetch the previous page of the list. 21 "run_id": null,
22 "metadata": {}
23 },
24 {
Returns 25 "id": "msg_abc456",
26 "object": "thread.message",
A list of message objects. 27 "created_at": 1699016383,
28 "thread_id": "thread_abc123",
29 "role": "user",
30 "content": [
31 {
32 "type": "text",
33 "text": {
34 "value": "Hello, what is AI?",
35 "annotations": []
36 }
37 }
38 ],
39 "file_ids": [
40 "file-abc123"
41 ],
42 "assistant_id": null,
43 "run_id": null,
44 "metadata": {}
45 }
46 ],
47 "first_id": "msg_abc123",
48 "last_id": "msg_abc456",
49 "has_more": false
50 }

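The after and before cursor parameters above support walking an entire thread page by page. Here is a hedged sketch of that loop, written against an injected fetch function so the cursor logic is visible on its own; `fetch_page` is a stand-in for a call such as `client.beta.threads.messages.list`, and the return shape assumed is the list response shown here:

```python
def iterate_all(fetch_page, limit=20):
    """Yield every object from a cursor-paginated list endpoint.

    fetch_page(limit=..., after=...) must return a dict shaped like the
    list responses on this page: {"data": [...], "last_id": ..., "has_more": ...}.
    """
    after = None
    while True:
        page = fetch_page(limit=limit, after=after)
        yield from page["data"]
        if not page["has_more"]:
            break
        # Resume just past the last object seen, per the `after` semantics above.
        after = page["last_id"]
```

With the real client, `fetch_page` could wrap `client.beta.threads.messages.list("thread_abc123", limit=limit, after=after)`; exactly how the SDK response converts to a dict is an assumption to verify against your SDK version.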
List message files Beta

GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/fil Example request python Copy


es
1 from openai import OpenAI
Returns a list of message files. 2 client = OpenAI()
3
4 message_files = client.beta.threads.messages.files.list(
Path parameters 5 thread_id="thread_abc123",
6 message_id="msg_abc123"
7 )
thread_id string Required
8 print(message_files)
The ID of the thread that the message and files belong to.

message_id string Required


The ID of the message that the files belong to. Response Copy

1 {
2 "object": "list",
Query parameters 3 "data": [
4 {
limit integer Optional Defaults to 20 5 "id": "file-abc123",
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the 6 "object": "thread.message.file",
default is 20. 7 "created_at": 1699061776,
8 "message_id": "msg_abc123"
9 },
order string Optional Defaults to desc 10 {
Sort order by the created_at timestamp of the objects. asc for ascending order and desc 11 "id": "file-abc123",
for descending order. 12 "object": "thread.message.file",
13 "created_at": 1699061776,
after string Optional 14 "message_id": "msg_abc123"
A cursor for use in pagination. after is an object ID that defines your place in the list. For 15 }

instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent 16 ],
17 "first_id": "file-abc123",
call can include after=obj_foo in order to fetch the next page of the list.
18 "last_id": "file-abc123",
19 "has_more": false
before string Optional 20 }
A cursor for use in pagination. before is an object ID that defines your place in the list. For
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent
call can include before=obj_foo in order to fetch the previous page of the list.

Returns

A list of message file objects.

Retrieve message Beta

GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id} Example request python Copy

Retrieve a message. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 message = client.beta.threads.messages.retrieve(
5 message_id="msg_abc123",
thread_id string Required 6 thread_id="thread_abc123",
The ID of the thread to which this message belongs. 7 )
8 print(message)

message_id string Required


The ID of the message to retrieve. Response Copy

1 {
2 "id": "msg_abc123",
Returns 3 "object": "thread.message",
4 "created_at": 1699017614,
The message object matching the specified ID. 5 "thread_id": "thread_abc123",
6 "role": "user",
7 "content": [
8 {
9 "type": "text",
10 "text": {
11 "value": "How does AI work? Explain it in si
12 "annotations": []
13 }
14 }
15 ],
16 "file_ids": [],
17 "assistant_id": null,
18 "run_id": null,
19 "metadata": {}
20 }

Retrieve message file Beta

GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/fil Example request python Copy


es/{file_id}
1 from openai import OpenAI
Retrieves a message file. 2 client = OpenAI()
3
4 message_files = client.beta.threads.messages.files.retrieve(
Path parameters 5 thread_id="thread_abc123",
6 message_id="msg_abc123",
7 file_id="file-abc123"
thread_id string Required
8 )
The ID of the thread to which the message and File belong.
9 print(message_files)

message_id string Required


The ID of the message the file belongs to.
Response Copy

file_id string Required 1 {


The ID of the file being retrieved. 2 "id": "file-abc123",
3 "object": "thread.message.file",
4 "created_at": 1699061776,
5 "message_id": "msg_abc123"
Returns
6 }

The message file object.

Modify message Beta

POST https://api.openai.com/v1/threads/{thread_id}/messages/{message_id} Example request python Copy

Modifies a message. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 message = client.beta.threads.messages.update(
5 message_id="msg_abc123",
thread_id string Required 6 thread_id="thread_abc123",
The ID of the thread to which this message belongs. 7 metadata={
8 "modified": "true",
9 "user": "abc123",
message_id string Required 10 },
The ID of the message to modify. 11 )
12 print(message)

Request body Copy


Response

metadata map Optional 1 {


Set of 16 key-value pairs that can be attached to an object. This can be useful for storing 2 "id": "msg_abc123",
additional information about the object in a structured format. Keys can be a maximum of 64 3 "object": "thread.message",
characters long and values can be a maximum of 512 characters long. 4 "created_at": 1699017614,
5 "thread_id": "thread_abc123",
Returns 6 "role": "user",
7 "content": [
8 {
The modified message object.
9 "type": "text",
10 "text": {
11 "value": "How does AI work? Explain it in si
12 "annotations": []
13 }
14 }
15 ],
16 "file_ids": [],
17 "assistant_id": null,
18 "run_id": null,
19 "metadata": {
20 "modified": "true",
21 "user": "abc123"
22 }
23 }

The message object Beta

Represents a message within a thread. The message object Copy

id string 1 {

The identifier, which can be referenced in API endpoints. 2 "id": "msg_abc123",


3 "object": "thread.message",
4 "created_at": 1698983503,
object string 5 "thread_id": "thread_abc123",
The object type, which is always thread.message . 6 "role": "assistant",
7 "content": [

created_at integer 8 {

The Unix timestamp (in seconds) for when the message was created. 9 "type": "text",
10 "text": {
11 "value": "Hi! How can I help you today?",
thread_id string 12 "annotations": []
The thread ID that this message belongs to. 13 }
14 }
role string 15 ],
The entity that produced the message. One of user or assistant . 16 "file_ids": [],
17 "assistant_id": "asst_abc123",
18 "run_id": "run_abc123",
content array 19 "metadata": {}
The content of the message in array of text and/or images. 20 }
Show possible types

assistant_id string or null


If applicable, the ID of the assistant that authored this message.

run_id string or null


If applicable, the ID of the run associated with the authoring of this message.

file_ids array
A list of file IDs that the assistant should use. Useful for tools like retrieval and code_interpreter
that can access files. A maximum of 10 files can be attached to a message.

metadata map
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing
additional information about the object in a structured format. Keys can be a maximum of 64
characters long and values can be a maximum of 512 characters long.

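Because content is an array of typed parts rather than a plain string, pulling the text out of a message takes a small filter. A sketch (the helper name is ours):

```python
def message_text(message: dict) -> str:
    """Concatenate the text parts of a message's content array."""
    return "\n".join(
        part["text"]["value"]
        for part in message["content"]
        if part["type"] == "text"
    )

# Shaped like the message object shown on this page.
msg = {
    "content": [
        {"type": "text",
         "text": {"value": "Hi! How can I help you today?", "annotations": []}}
    ]
}
# message_text(msg) -> "Hi! How can I help you today?"
```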
The message file object Beta

A list of files attached to a message. The message file object Copy

id string 1 {
The identifier, which can be referenced in API endpoints. 2 "id": "file-abc123",
3 "object": "thread.message.file",
4 "created_at": 1698107661,
object string 5 "message_id": "message_QLoItBbqwyAJEzlTy4y9kOMM",
The object type, which is always thread.message.file . 6 "file_id": "file-abc123"
7 }
created_at integer
The Unix timestamp (in seconds) for when the message file was created.

message_id string
The ID of the message that the File is attached to.

Runs Beta

Represents an execution run on a thread.

Related guide: Assistants

Create run Beta

POST https://api.openai.com/v1/threads/{thread_id}/runs Example request python Copy

Create a run. 1 from openai import OpenAI


2 client = OpenAI()
3
4 run = client.beta.threads.runs.create(
Path parameters 5 thread_id="thread_abc123",
6 assistant_id="asst_abc123"
7 )
thread_id string Required
8 print(run)
The ID of the thread to run.

Response Copy
Request body
1 {
2 "id": "run_abc123",
assistant_id string Required 3 "object": "thread.run",
The ID of the assistant to use to execute this run.
4 "created_at": 1699063290,
5 "assistant_id": "asst_abc123",
model string or null Optional 6 "thread_id": "thread_abc123",
The ID of the Model to be used to execute this run. If a value is provided here, it will override the 7 "status": "queued",
model associated with the assistant. If not, the model associated with the assistant will be used. 8 "started_at": 1699063290,
9 "expires_at": null,
10 "cancelled_at": null,
instructions string or null Optional 11 "failed_at": null,
Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run 12 "completed_at": 1699063291,
basis. 13 "last_error": null,
14 "model": "gpt-4",
additional_instructions string or null Optional 15 "instructions": null,
Appends additional instructions at the end of the instructions for the run. This is useful for 16 "tools": [

modifying the behavior on a per-run basis without overriding other instructions. 17 {


18 "type": "code_interpreter"
19 }
tools array or null Optional 20 ],
Override the tools the assistant can use for this run. This is useful for modifying the behavior on a 21 "file_ids": [
per-run basis. 22 "file-abc123",
Show possible types 23 "file-abc456"
24 ],
25 "metadata": {},
metadata map Optional 26 "usage": null
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing 27 }
additional information about the object in a structured format. Keys can be a maximum of 64
characters long and values can be a maximum of 512 characters long.

Returns

A run object.

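A run is asynchronous: create returns it in status "queued", and you poll retrieve until it reaches a terminal status. A minimal polling sketch with the fetch and clock injected so the control flow stands on its own; the status names are taken from the run object documented below, and the helper is ours, not an SDK method:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(fetch_run, interval=1.0, timeout=600, sleep=time.sleep):
    """Poll fetch_run() until the run reaches a terminal status.

    fetch_run must return a dict with at least a "status" key, e.g. a
    wrapper around client.beta.threads.runs.retrieve(...).
    """
    waited = 0.0
    while True:
        run = fetch_run()
        if run["status"] in TERMINAL_STATUSES:
            return run
        if run["status"] == "requires_action":
            # Caller must submit tool outputs; waiting longer won't help.
            return run
        if waited >= timeout:
            raise TimeoutError(f"run still {run['status']} after {timeout}s")
        sleep(interval)
        waited += interval
```

In practice `fetch_run` would be `lambda: client.beta.threads.runs.retrieve(thread_id=..., run_id=...)` with the response converted to a dict.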
Create thread and run Beta

POST https://api.openai.com/v1/threads/runs Example request python Copy

Create a thread and run it in one request. 1 from openai import OpenAI
2 client = OpenAI()
3
Request body 4 run = client.beta.threads.create_and_run(
5 assistant_id="asst_abc123",
assistant_id string Required 6 thread={
The ID of the assistant to use to execute this run. 7 "messages": [
8 {"role": "user", "content": "Explain deep lear
9 ]
thread object Optional 10 }
Show properties 11 )

model string or null Optional


The ID of the Model to be used to execute this run. If a value is provided here, it will override the Response Copy
model associated with the assistant. If not, the model associated with the assistant will be used.
1 {
2 "id": "run_abc123",
instructions string or null Optional
3 "object": "thread.run",
Override the default system message of the assistant. This is useful for modifying the behavior on
4 "created_at": 1699076792,
a per-run basis.
5 "assistant_id": "asst_abc123",
6 "thread_id": "thread_abc123",
tools array or null Optional 7 "status": "queued",
Override the tools the assistant can use for this run. This is useful for modifying the behavior on a 8 "started_at": null,
per-run basis. 9 "expires_at": 1699077392,
10 "cancelled_at": null,
11 "failed_at": null,
metadata map Optional
12 "completed_at": null,
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing
13 "last_error": null,
additional information about the object in a structured format. Keys can be a maximum of 64
14 "model": "gpt-4",
characters long and values can be a maximum of 512 characters long. 15 "instructions": "You are a helpful assistant.",
16 "tools": [],
17 "file_ids": [],
Returns 18 "metadata": {},
19 "usage": null
20 }
A run object.

List runs Beta

GET https://api.openai.com/v1/threads/{thread_id}/runs Example request python Copy

Returns a list of runs belonging to a thread. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 runs = client.beta.threads.runs.list(
5 "thread_abc123"
thread_id string Required 6 )
The ID of the thread the run belongs to. 7 print(runs)

Response Copy
Query parameters
limit integer Optional Defaults to 20 1 {
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the 2 "object": "list",
default is 20. 3 "data": [
4 {
5 "id": "run_abc123",
order string Optional Defaults to desc 6 "object": "thread.run",
Sort order by the created_at timestamp of the objects. asc for ascending order and desc 7 "created_at": 1699075072,
for descending order. 8 "assistant_id": "asst_abc123",
9 "thread_id": "thread_abc123",

after string Optional 10 "status": "completed",

A cursor for use in pagination. after is an object ID that defines your place in the list. For 11 "started_at": 1699075072,
12 "expires_at": null,
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent
13 "cancelled_at": null,
call can include after=obj_foo in order to fetch the next page of the list.
14 "failed_at": null,
15 "completed_at": 1699075073,
before string Optional 16 "last_error": null,
A cursor for use in pagination. before is an object ID that defines your place in the list. For 17 "model": "gpt-3.5-turbo",
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent 18 "instructions": null,
call can include before=obj_foo in order to fetch the previous page of the list. 19 "tools": [
20 {
21 "type": "code_interpreter"
22 }
Returns 23 ],
24 "file_ids": [
A list of run objects. 25 "file-abc123",
26 "file-abc456"
27 ],
28 "metadata": {},
29 "usage": {
30 "prompt_tokens": 123,
31 "completion_tokens": 456,
32 "total_tokens": 579
33 }
34 },
35 {
36 "id": "run_abc456",
37 "object": "thread.run",
38 "created_at": 1699063290,
39 "assistant_id": "asst_abc123",
40 "thread_id": "thread_abc123",
41 "status": "completed",
42 "started_at": 1699063290,
43 "expires_at": null,
44 "cancelled_at": null,
45 "failed_at": null,
46 "completed_at": 1699063291,
47 "last_error": null,
48 "model": "gpt-3.5-turbo",
49 "instructions": null,
50 "tools": [
51 {
52 "type": "code_interpreter"
53 }
54 ],
55 "file_ids": [
56 "file-abc123",
57 "file-abc456"
58 ],
59 "metadata": {},
60 "usage": {
61 "prompt_tokens": 123,
62 "completion_tokens": 456,
63 "total_tokens": 579
64 }
65 }
66 ],
67 "first_id": "run_abc123",
68 "last_id": "run_abc456",
69 "has_more": false
70 }

List run steps Beta

GET https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/steps Example request python Copy

Returns a list of run steps belonging to a run. 1 from openai import OpenAI
2 client = OpenAI()
3
Path parameters 4 run_steps = client.beta.threads.runs.steps.list(
5 thread_id="thread_abc123",
thread_id string Required 6 run_id="run_abc123"
The ID of the thread the run and run steps belong to. 7 )
8 print(run_steps)

run_id string Required


The ID of the run the run steps belong to. Response Copy

1 {
2 "object": "list",
Query parameters 3 "data": [
4 {
limit integer Optional Defaults to 20 5 "id": "step_abc123",
A limit on the number of objects to be returned. Limit can range between 1 and 100, and the 6 "object": "thread.run.step",
default is 20. 7 "created_at": 1699063291,
8 "run_id": "run_abc123",
9 "assistant_id": "asst_abc123",
order string Optional Defaults to desc
10 "thread_id": "thread_abc123",
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
11 "type": "message_creation",
for descending order.
12 "status": "completed",
13 "cancelled_at": null,
after string Optional 14 "completed_at": 1699063291,
A cursor for use in pagination. after is an object ID that defines your place in the list. For 15 "expired_at": null,
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent 16 "failed_at": null,
call can include after=obj_foo in order to fetch the next page of the list. 17 "last_error": null,
18 "step_details": {
19 "type": "message_creation",
before string Optional
20 "message_creation": {
A cursor for use in pagination. before is an object ID that defines your place in the list. For 21 "message_id": "msg_abc123"
instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent 22 }
call can include before=obj_foo in order to fetch the previous page of the list. 23 },
24 "usage": {
25 "prompt_tokens": 123,
26 "completion_tokens": 456,
Returns 27 "total_tokens": 579
28 }
A list of run step objects. 29 }
30 ],
31 "first_id": "step_abc123",
32 "last_id": "step_abc456",
33 "has_more": false
34 }

Retrieve run Beta

GET https://api.openai.com/v1/threads/{thread_id}/runs/{run_id} Example request python Copy

Retrieves a run. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 run = client.beta.threads.runs.retrieve(
5 thread_id="thread_abc123",
thread_id string Required 6 run_id="run_abc123"
The ID of the thread that was run. 7 )
8 print(run)

run_id string Required


The ID of the run to retrieve. Response Copy

1 {
2 "id": "run_abc123",
Returns 3 "object": "thread.run",
4 "created_at": 1699075072,
The run object matching the specified ID. 5 "assistant_id": "asst_abc123",
6 "thread_id": "thread_abc123",
7 "status": "completed",
8 "started_at": 1699075072,
9 "expires_at": null,
10 "cancelled_at": null,
11 "failed_at": null,
12 "completed_at": 1699075073,
13 "last_error": null,
14 "model": "gpt-3.5-turbo",
15 "instructions": null,
16 "tools": [
17 {
18 "type": "code_interpreter"
19 }
20 ],
21 "file_ids": [
22 "file-abc123",
23 "file-abc456"
24 ],
25 "metadata": {},
26 "usage": {
27 "prompt_tokens": 123,
28 "completion_tokens": 456,
29 "total_tokens": 579
30 }
31 }

Retrieve run step Beta

GET https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/steps/{step Example request python Copy


_id}
1 from openai import OpenAI
Retrieves a run step. 2 client = OpenAI()
3
4 run_step = client.beta.threads.runs.steps.retrieve(
Path parameters 5 thread_id="thread_abc123",
6 run_id="run_abc123",
7 step_id="step_abc123"
thread_id string Required
8 )
The ID of the thread to which the run and run step belongs.
9 print(run_step)

run_id string Required


The ID of the run to which the run step belongs.
Response Copy

step_id string Required 1 {


The ID of the run step to retrieve. 2 "id": "step_abc123",
3 "object": "thread.run.step",
4 "created_at": 1699063291,
5 "run_id": "run_abc123",
Returns
6 "assistant_id": "asst_abc123",
7 "thread_id": "thread_abc123",
The run step object matching the specified ID. 8 "type": "message_creation",
9 "status": "completed",
10 "cancelled_at": null,
11 "completed_at": 1699063291,
12 "expired_at": null,
13 "failed_at": null,
14 "last_error": null,
15 "step_details": {
16 "type": "message_creation",
17 "message_creation": {
18 "message_id": "msg_abc123"
19 }
20 },
21 "usage": {
22 "prompt_tokens": 123,
23 "completion_tokens": 456,
24 "total_tokens": 579
25 }
26 }

Modify run Beta

POST https://api.openai.com/v1/threads/{thread_id}/runs/{run_id} Example request python Copy

Modifies a run. 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 run = client.beta.threads.runs.update(
5 thread_id="thread_abc123",
thread_id string Required 6 run_id="run_abc123",
The ID of the thread that was run. 7 metadata={"user_id": "user_abc123"},
8 )
9 print(run)
run_id string Required
The ID of the run to modify.
Response Copy

1 {
Request body 2 "id": "run_abc123",
3 "object": "thread.run",
metadata map Optional 4 "created_at": 1699075072,
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing 5 "assistant_id": "asst_abc123",
additional information about the object in a structured format. Keys can be a maximum of 64 6 "thread_id": "thread_abc123",
characters long and values can be a maximum of 512 characters long. 7 "status": "completed",
8 "started_at": 1699075072,
9 "expires_at": null,
10 "cancelled_at": null,
Returns 11 "failed_at": null,
12 "completed_at": 1699075073,
The modified run object matching the specified ID. 13 "last_error": null,
14 "model": "gpt-3.5-turbo",
15 "instructions": null,
16 "tools": [
17 {
18 "type": "code_interpreter"
19 }
20 ],
21 "file_ids": [
22 "file-abc123",
23 "file-abc456"
24 ],
25 "metadata": {
26 "user_id": "user_abc123"
27 },
28 "usage": {
29 "prompt_tokens": 123,
30 "completion_tokens": 456,
31 "total_tokens": 579
32 }
33 }

Submit tool outputs to run Beta

POST https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/submit_too Example request python Copy


l_outputs
1 from openai import OpenAI
When a run has status "requires_action" and required_action.type is 2 client = OpenAI()
3
submit_tool_outputs , this endpoint can be used to submit the outputs from the tool
4 run = client.beta.threads.runs.submit_tool_outputs(
calls once they're all completed. All outputs must be submitted in a single request. 5 thread_id="thread_abc123",
6 run_id="run_abc123",
7 tool_outputs=[
Path parameters 8 {
9 "tool_call_id": "call_abc123",
thread_id string Required 10 "output": "28C"
The ID of the thread to which this run belongs. 11 }
12 ]
13 )
run_id string Required
14 print(run)
The ID of the run that requires the tool output submission.

Response Copy
Request body
1 {
tool_outputs array Required 2 "id": "run_abc123",
A list of tools for which the outputs are being submitted. 3 "object": "thread.run",
Show properties 4 "created_at": 1699075592,
5 "assistant_id": "asst_abc123",
6 "thread_id": "thread_abc123",
7 "status": "queued",
Returns 8 "started_at": 1699075592,
9 "expires_at": 1699076192,
The modified run object matching the specified ID. 10 "cancelled_at": null,
11 "failed_at": null,
12 "completed_at": null,
13 "last_error": null,
14 "model": "gpt-4",
15 "instructions": "You tell the weather.",
16 "tools": [
17 {
18 "type": "function",
19 "function": {
20 "name": "get_weather",
21 "description": "Determine weather in my loca
22 "parameters": {
23 "type": "object",
24 "properties": {
25 "location": {
26 "type": "string",
27 "description": "The city and state e.g
28 },
29 "unit": {
30 "type": "string",
31 "enum": [
32 "c",
33 "f"
34 ]
35 }
36 },
37 "required": [
38 "location"
39 ]
40 }
41 }
42 }
43 ],
44 "file_ids": [],
45 "metadata": {},
46 "usage": null
47 }

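When a run stops in requires_action, the pending calls live under required_action.submit_tool_outputs.tool_calls, and each must be answered by its tool_call_id. A sketch that maps call names to local Python functions; the registry pattern and the get_weather stub are ours, not part of the API:

```python
import json

def build_tool_outputs(run: dict, registry: dict) -> list:
    """Answer every pending tool call on a requires_action run.

    registry maps function names to callables that accept the
    JSON-decoded arguments as keyword arguments.
    """
    calls = run["required_action"]["submit_tool_outputs"]["tool_calls"]
    outputs = []
    for call in calls:
        fn = registry[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        outputs.append({
            "tool_call_id": call["id"],
            "output": str(fn(**args)),
        })
    return outputs

# Stub implementation; a real one would actually look the weather up.
registry = {"get_weather": lambda location, unit="c": f"28{unit.upper()}, {location}"}
```

The returned list is exactly the tool_outputs payload to pass to submit_tool_outputs, which is why all calls are answered in one pass: the endpoint requires all outputs in a single request.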
Cancel a run Beta

POST https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/cancel Example request python Copy

Cancels a run that is in_progress . 1 from openai import OpenAI


2 client = OpenAI()
3
Path parameters 4 run = client.beta.threads.runs.cancel(
5 thread_id="thread_abc123",

thread_id string Required 6 run_id="run_abc123"

The ID of the thread to which this run belongs. 7 )


8 print(run)

run_id string Required


The ID of the run to cancel. Response Copy

1 {
2 "id": "run_abc123",
Returns 3 "object": "thread.run",
4 "created_at": 1699076126,
The modified run object matching the specified ID. 5 "assistant_id": "asst_abc123",
6 "thread_id": "thread_abc123",
7 "status": "cancelling",
8 "started_at": 1699076126,
9 "expires_at": 1699076726,
10 "cancelled_at": null,
11 "failed_at": null,
12 "completed_at": null,
13 "last_error": null,
14 "model": "gpt-4",
15 "instructions": "You summarize books.",
16 "tools": [
17 {
18 "type": "retrieval"
19 }
20 ],
21 "file_ids": [],
22 "metadata": {},
23 "usage": null
24 }

The run object Beta

Represents an execution run on a thread. The run object Copy

id string 1 {
The identifier, which can be referenced in API endpoints. 2 "id": "run_abc123",
3 "object": "thread.run",
4 "created_at": 1698107661,
object string 5 "assistant_id": "asst_abc123",
The object type, which is always thread.run . 6 "thread_id": "thread_abc123",
7 "status": "completed",
created_at integer 8 "started_at": 1699073476,
The Unix timestamp (in seconds) for when the run was created. 9 "expires_at": null,
10 "cancelled_at": null,
11 "failed_at": null,
thread_id string 12 "completed_at": 1699073498,
The ID of the thread that was executed on as a part of this run. 13 "last_error": null,
14 "model": "gpt-4",
assistant_id string 15 "instructions": null,
The ID of the assistant used for execution of this run. 16 "tools": [{"type": "retrieval"}, {"type": "code_in
17 "file_ids": [],
18 "metadata": {},
status string 19 "usage": {
The status of the run, which can be either queued , in_progress , requires_action , 20 "prompt_tokens": 123,
cancelling , cancelled , failed , completed , or expired . 21 "completion_tokens": 456,
22 "total_tokens": 579
23 }
required_action object or null
24 }
Details on the action required to continue the run. Will be null if no action is required.
Show properties

last_error object or null


The last error associated with this run. Will be null if there are no errors.
Show properties

expires_at integer
The Unix timestamp (in seconds) for when the run will expire.

started_at integer or null


The Unix timestamp (in seconds) for when the run was started.

cancelled_at integer or null


The Unix timestamp (in seconds) for when the run was cancelled.

failed_at integer or null


The Unix timestamp (in seconds) for when the run failed.

completed_at integer or null


The Unix timestamp (in seconds) for when the run was completed.

model string
The model that the assistant used for this run.

instructions string
The instructions that the assistant used for this run.

tools array
The list of tools that the assistant used for this run.
Show possible types

file_ids array
The list of File IDs the assistant used for this run.

metadata map
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing
additional information about the object in a structured format. Keys can be a maximum of 64
characters long and values can be a maximum of 512 characters long.

usage object or null


Usage statistics related to the run. This value will be null if the run is not in a terminal state (e.g.
in_progress , queued , etc.).
Show properties

The run step object Beta

Represents a step in execution of a run. The run step object Copy

id string 1 {
The identifier of the run step, which can be referenced in API endpoints. 2 "id": "step_abc123",
3 "object": "thread.run.step",
4 "created_at": 1699063291,
object string 5 "run_id": "run_abc123",
The object type, which is always thread.run.step . 6 "assistant_id": "asst_abc123",
7 "thread_id": "thread_abc123",
created_at integer 8 "type": "message_creation",
The Unix timestamp (in seconds) for when the run step was created. 9 "status": "completed",
10 "cancelled_at": null,
11 "completed_at": 1699063291,
assistant_id string 12 "expired_at": null,
The ID of the assistant associated with the run step. 13 "failed_at": null,
14 "last_error": null,
thread_id string 15 "step_details": {
The ID of the thread that was run. 16 "type": "message_creation",
17 "message_creation": {
18 "message_id": "msg_abc123"
run_id string 19 }
The ID of the run that this run step is a part of. 20 },
21 "usage": {
type string 22 "prompt_tokens": 123,
The type of run step, which can be either message_creation or tool_calls . 23 "completion_tokens": 456,
24 "total_tokens": 579
25 }
status string 26 }
The status of the run step, which can be either in_progress , cancelled , failed ,
completed , or expired .

step_details object
The details of the run step.
Show possible types

last_error object or null


The last error associated with this run step. Will be null if there are no errors.
Show properties

expired_at integer or null


The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the
parent run is expired.

cancelled_at integer or null


The Unix timestamp (in seconds) for when the run step was cancelled.

failed_at integer or null


The Unix timestamp (in seconds) for when the run step failed.

completed_at integer or null


The Unix timestamp (in seconds) for when the run step completed.

metadata map
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing
additional information about the object in a structured format. Keys can be a maximum of 64
characters long and values can be a maximum of 512 characters long.
usage object or null
Usage statistics related to the run step. This value will be null while the run step's status is
in_progress .
Show properties

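Since each finished run step carries its own usage block (and in_progress steps report null), totaling tokens across a list-steps response takes a small fold. A sketch, with a helper name of our own choosing:

```python
def total_usage(steps: list) -> dict:
    """Sum token usage over run steps, skipping steps whose usage is null."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for step in steps:
        usage = step.get("usage")
        if usage:
            for key in totals:
                totals[key] += usage[key]
    return totals
```

Feeding it the `data` array of a list-run-steps response gives per-run token totals without waiting for the run object's own usage field.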
Completions Legacy

Given a prompt, the model will return one or more predicted completions along with the probabilities of
alternative tokens at each position. Most developers should use our Chat Completions API to leverage our
best and newest models. Most models that support the legacy Completions endpoint will be shut off on
January 4th, 2024.

Create completion Legacy

POST https://api.openai.com/v1/completions

Creates a completion for the provided prompt and parameters.

Example request (python):

    from openai import OpenAI
    client = OpenAI()

    client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Say this is a test",
        max_tokens=7,
        temperature=0
    )

Response:

    {
      "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
      "object": "text_completion",
      "created": 1589478378,
      "model": "gpt-3.5-turbo-instruct",
      "system_fingerprint": "fp_44709d6fcb",
      "choices": [
        {
          "text": "\n\nThis is indeed a test",
          "index": 0,
          "logprobs": null,
          "finish_reason": "length"
        }
      ],
      "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 7,
        "total_tokens": 12
      }
    }

Request body

model string Required
ID of the model to use. You can use the List models API to see all of your available models, or see
our Model overview for descriptions of them.

prompt string or array Required
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens,
or array of token arrays.

Note that <|endoftext|> is the document separator that the model sees during training, so if a
prompt is not specified the model will generate as if from the beginning of a new document.

best_of integer or null Optional Defaults to 1
Generates best_of completions server-side and returns the "best" (the one with the highest
log probability per token). Results cannot be streamed.

When used with n , best_of controls the number of candidate completions and n specifies
how many to return – best_of must be greater than n .

Note: Because this parameter generates many completions, it can quickly consume your token
quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop .

echo boolean or null Optional Defaults to false
Echo back the prompt in addition to the completion.

frequency_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing
frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
See more information about frequency and presence penalties.

logit_bias map Optional Defaults to null


Modify the likelihood of specified tokens appearing in the completion.

Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an
associated bias value from -100 to 100. You can use this tokenizer tool to convert text to token IDs.
Mathematically, the bias is added to the logits generated by the model prior to sampling. The
exact effect will vary per model, but values between -1 and 1 should decrease or increase
likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the
relevant token.

As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being
generated.
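The bias map above can be sketched as a request body. This is an illustrative payload, not an official snippet; the parameter names mirror this reference, and the model/prompt values are taken from the example request earlier in this section:

```python
import json

# Assigning token ID 50256 (<|endoftext|>) a bias of -100 effectively
# bans it from being sampled, per the logit_bias description above.
payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "logit_bias": {"50256": -100},  # token ID -> bias in [-100, 100]
}
body = json.dumps(payload)
```

Note that the keys of the logit_bias map are token IDs serialized as strings, not the token text itself.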

logprobs integer or null Optional Defaults to null


Include the log probabilities on the logprobs most likely output tokens, as well as the chosen
tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The
API will always return the logprob of the sampled token, so there may be up to logprobs+1
elements in the response.

The maximum value for logprobs is 5.

max_tokens integer or null Optional Defaults to 16


The maximum number of tokens that can be generated in the completion.

The token count of your prompt plus max_tokens cannot exceed the model's context length.
Example Python code for counting tokens.
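The context-length constraint above can be sketched as a simple check. The token counts and context length below are illustrative assumptions, not real model limits:

```python
# The prompt's token count plus max_tokens must not exceed the model's
# context length; otherwise the request is rejected.
def fits_context(prompt_tokens: int, max_tokens: int, context_length: int) -> bool:
    return prompt_tokens + max_tokens <= context_length

# A 5-token prompt with max_tokens=16 fits easily in a 4096-token context.
ok = fits_context(prompt_tokens=5, max_tokens=16, context_length=4096)
```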

n integer or null Optional Defaults to 1


How many completions to generate for each prompt.

Note: Because this parameter generates many completions, it can quickly consume your token
quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop .

presence_penalty number or null Optional Defaults to 0


Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear
in the text so far, increasing the model's likelihood to talk about new topics.

See more information about frequency and presence penalties.

seed integer or null Optional


If specified, our system will make a best effort to sample deterministically, such that repeated
requests with the same seed and parameters should return the same result.

Determinism is not guaranteed, and you should refer to the system_fingerprint response
parameter to monitor changes in the backend.

stop string / array / null Optional Defaults to null


Up to 4 sequences where the API will stop generating further tokens. The returned text will not
contain the stop sequence.
stream boolean or null Optional Defaults to false
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent
events as they become available, with the stream terminated by a data: [DONE] message.
Example Python code.
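The stream format described above can be sketched as follows. The helper function and the sample events are fabricated for illustration; they show data-only server-sent events, each carrying a JSON completion chunk, terminated by a data: [DONE] message:

```python
import json

# Consume "data: ..." server-sent event lines until "data: [DONE]",
# collecting the text of each streamed completion chunk.
def read_stream(lines):
    texts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # stream terminator
        chunk = json.loads(data)
        texts.append(chunk["choices"][0]["text"])
    return "".join(texts)

sample = [
    'data: {"choices": [{"text": "Hello", "index": 0}]}',
    'data: {"choices": [{"text": " world", "index": 0}]}',
    "data: [DONE]",
]
result = read_stream(sample)  # -> "Hello world"
```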

suffix string or null Optional Defaults to null


The suffix that comes after a completion of inserted text.

temperature number or null Optional Defaults to 1


What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output
more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

top_p number or null Optional Defaults to 1


An alternative to sampling with temperature, called nucleus sampling, where the model considers
the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

user string Optional


A unique identifier representing your end-user, which can help OpenAI to monitor and detect
abuse. Learn more.

Returns

Returns a completion object, or a sequence of completion objects if the request is streamed.
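As a sketch of reading a non-streamed completion object, the dict below reproduces fields from the example response earlier in this section:

```python
# Field values taken from the example response shown above.
completion = {
    "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
    "object": "text_completion",
    "choices": [
        {"text": "\n\nThis is indeed a test", "index": 0,
         "logprobs": None, "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
}
# The generated text lives under choices[i].text; token accounting under usage.
text = completion["choices"][0]["text"]
total = completion["usage"]["total_tokens"]
```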

The completion object Legacy

Represents a completion response from the API. Note: both the streamed and non-
streamed response objects share the same shape (unlike the chat endpoint).

Example: The completion object

    {
      "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
      "object": "text_completion",

id string
A unique identifier for the completion.