
# Quickstart

> Get started with OpenRouter's unified API for hundreds of AI models. Learn how to integrate
using OpenAI SDK, direct API calls, or third-party frameworks.

OpenRouter provides a unified API that gives you access to hundreds of AI models through a single
endpoint, while automatically handling fallbacks and selecting the most cost-effective options.
Get started with just a few lines of code using your preferred SDK or framework.

<Tip>
Want to chat with our docs? Download an LLM-friendly text file of our [full
documentation](/docs/llms-full.txt) and include it in your system prompt.
</Tip>

In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app
to appear on the OpenRouter leaderboards.

## Using the OpenAI SDK

<CodeGroup>
```python title="Python"
from openai import OpenAI

client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key="<OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
  extra_headers={
    "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>",  # Optional. Site title for rankings on openrouter.ai.
  },
  model="openai/gpt-4o",
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?",
    }
  ],
)

print(completion.choices[0].message.content)
```

```typescript title="TypeScript"
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: '<OPENROUTER_API_KEY>',
  defaultHeaders: {
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'openai/gpt-4o',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });

  console.log(completion.choices[0].message);
}

main();
```
</CodeGroup>

## Using the OpenRouter API directly

<CodeGroup>
```python title="Python"
import requests
import json

response = requests.post(
  url="https://openrouter.ai/api/v1/chat/completions",
  headers={
    "Authorization": "Bearer <OPENROUTER_API_KEY>",
    "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>",  # Optional. Site title for rankings on openrouter.ai.
  },
  data=json.dumps({
    "model": "openai/gpt-4o",  # Optional
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?",
      }
    ],
  }),
)
```

```typescript title="TypeScript"
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  }),
});
```

```shell title="Shell"
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'
```
</CodeGroup>

The API also supports [streaming](/docs/api-reference/streaming).

## Using third-party SDKs

For information about using third-party SDKs and frameworks with OpenRouter, please [see our
frameworks documentation.](/docs/community/frameworks)

# Frequently Asked Questions

> Find answers to commonly asked questions about OpenRouter's unified API, model access, pricing,
and integration.

## Getting started

<AccordionGroup>
<Accordion title="Why should I use OpenRouter?">
OpenRouter provides a unified API to access all the major LLM models on the
market. It also allows users to aggregate their billing in one place and
keep track of all of their usage using our analytics.

OpenRouter passes through the pricing of the underlying providers, while pooling their uptime,
so you get the same pricing you'd get from the provider directly, with a
unified API and fallbacks so that you get much better uptime.
</Accordion>

<Accordion title="How do I get started with OpenRouter?">


To get started, create an account and add credits on the
[Credits](https://openrouter.ai/settings/credits) page. Credits are simply
deposits on OpenRouter that you use for LLM inference.
When you use the API or chat interface, we deduct the request cost from your
credits. Each model and provider has a different price per million tokens.

Once you have credits you can either use the chat room, or create API keys
and start using the API. You can read our [quickstart](/docs/quickstart)
guide for code samples and more.
</Accordion>

<Accordion title="How do I get support?">


The best way to get support is to join our
[Discord](https://discord.gg/fVyRaUDgxW) and ping us in the #help forum.
</Accordion>

<Accordion title="How do I get billed for my usage on OpenRouter?">


For each model we display pricing per million tokens, usually with a different price for prompt
and completion tokens. Some models also charge per request, per image, or for reasoning tokens.
All of these details are visible on the models page.

When you make a request to OpenRouter, we receive the total number of tokens processed
by the provider. We then calculate the corresponding cost and deduct it from your credits.
You can review your complete usage history in the [Activity tab](https://openrouter.ai/activity).

You can also add the `usage: {include: true}` parameter to your chat request
to get the usage information in the response.
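For example, here is a minimal sketch using Python's `requests` (the model slug and prompt are
illustrative):

```python
import requests

# Opting into usage accounting: the response then carries token counts
# (and cost details) in its usage object.
response = requests.post(
  url="https://openrouter.ai/api/v1/chat/completions",
  headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
  json={
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "usage": {"include": True},
  },
)

print(response.json()["usage"])
```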

We pass through the pricing of the underlying providers; there is no markup on inference pricing
(however we do charge a [fee](https://openrouter.ai/terms#_4_-payment) when purchasing credits).
</Accordion>
</AccordionGroup>

## Models and Providers

<AccordionGroup>
<Accordion title="What LLM models does OpenRouter support?">
OpenRouter provides access to a wide variety of LLM models, including frontier models from
major AI labs.

For a complete list of models you can visit the [models browser](https://openrouter.ai/models)
or fetch the list through the [models api](https://openrouter.ai/api/v1/models).
</Accordion>

<Accordion title="How frequently are new models added?">


We work on adding models as quickly as we can. We often have partnerships with
the labs releasing models and can release models as soon as they are
available. If there is a model missing that you'd like OpenRouter to support, feel free to
message us on
[Discord](https://discord.gg/fVyRaUDgxW).
</Accordion>

<Accordion title="What are model variants?">


Variants are suffixes that can be added to the model slug to change its behavior.

Static variants can only be used with specific models; these are listed in our [models api](https://openrouter.ai/api/v1/models).

1. `:free` - The model is always provided for free and has low rate limits.
2. `:beta` - The model is not moderated by OpenRouter.
3. `:extended` - The model has longer than usual context length.
4. `:thinking` - The model supports reasoning by default.

Dynamic variants can be used on all models and they change the behavior of how the request is
routed or used.

1. `:online` - All requests will run a query to extract web results that are attached to the
prompt.
2. `:nitro` - Providers will be sorted by throughput rather than the default sort, optimizing
for faster response times.
3. `:floor` - Providers will be sorted by price rather than the default sort, prioritizing the
most cost-effective options.
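
For example, a sketch of a request using a dynamic variant, appending `:nitro` to a model slug
(the slug itself is illustrative):

```python
import requests

response = requests.post(
  url="https://openrouter.ai/api/v1/chat/completions",
  headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
  json={
    # ":nitro" is a dynamic variant: providers are sorted by throughput.
    "model": "meta-llama/llama-3.1-70b-instruct:nitro",
    "messages": [{"role": "user", "content": "Hello"}],
  },
)
```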
</Accordion>

<Accordion title="I am an inference provider, how can I get listed on OpenRouter?">


You can read our requirements at the [Providers
page](/docs/use-cases/for-providers). If you would like to contact us, the best
place to reach us is over email.
</Accordion>

<Accordion title="What is the expected latency/response time for different models?">


For each model on OpenRouter we show the latency (time to first token) and the token
throughput for all providers. You can use this to estimate how long requests
will take. If you would like to optimize for throughput you can use the
`:nitro` variant to route to the fastest provider.
</Accordion>

<Accordion title="How does model fallback work if a provider is unavailable?">


If a provider returns an error, OpenRouter will automatically fall back to the
next provider. This happens transparently to the user and allows production
apps to be much more resilient. OpenRouter has many options to configure the
provider routing behavior. The full documentation can be found
[here](/docs/features/provider-routing).
</Accordion>
</AccordionGroup>

## API Technical Specifications

<AccordionGroup>
<Accordion title="What authentication methods are supported?">
OpenRouter uses three authentication methods:

1. Cookie-based authentication for the web interface and chatroom


2. API keys (passed as Bearer tokens) for accessing the completions API and other core
endpoints
3. [Provisioning API keys](/docs/features/provisioning-api-keys) for programmatically managing
API keys through the key management endpoints
</Accordion>

<Accordion title="How are rate limits calculated?">
For free models, rate limits are determined by the credits that you have purchased. If you
have
total credits purchased lower than {FREE_MODEL_CREDITS_THRESHOLD} credits, you will be rate
limited to {FREE_MODEL_NO_CREDITS_RPD} requests per day.
If you have purchased at least {FREE_MODEL_CREDITS_THRESHOLD} credits, you will be rate
limited to {FREE_MODEL_HAS_CREDITS_RPD} requests per day.

For all other models, rate limits are determined by the credits in your account. You can read
more
details in our [rate limits documentation](/docs/api-reference/limits).
</Accordion>

<Accordion title="What API endpoints are available?">


OpenRouter implements the OpenAI API specification for /completions and
/chat/completions endpoints, allowing you to use any model with the same
request/response format. Additional endpoints like /api/v1/models are also
available. See our [API documentation](/docs/api-reference/overview) for
detailed specifications.
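
For example, a sketch of listing the available models (assuming the response wraps the model
list in a `data` array, per the models API linked above):

```python
import requests

# Fetch the public model list; each entry's "id" is the slug used in requests.
models = requests.get("https://openrouter.ai/api/v1/models").json()
print([m["id"] for m in models["data"]][:5])
```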
</Accordion>

<Accordion title="What are the supported formats?">


The API supports text and images.
[Images](/docs/api-reference/overview#images--multimodal) can be passed as
URLs or base64 encoded images. PDF and other file types are coming soon.
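
For example, a sketch of the message shape for an image input, following the OpenAI-style
multimodal format (the URL is a placeholder; a base64 data URL also works):

```python
# One user message combining text and an image passed by URL.
message = {
  "role": "user",
  "content": [
    {"type": "text", "text": "What is in this image?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
  ],
}
```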
</Accordion>

<Accordion title="How does streaming work?">


Streaming uses server-sent events (SSE) for real-time token delivery. Set
`stream: true` in your request to enable streaming responses.
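
For example, a sketch using the OpenAI SDK (the model slug is illustrative):

```python
from openai import OpenAI

client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key="<OPENROUTER_API_KEY>",
)

# stream=True turns on SSE; tokens arrive incrementally as chunks.
stream = client.chat.completions.create(
  model="openai/gpt-4o",
  messages=[{"role": "user", "content": "Tell me a short story."}],
  stream=True,
)

for chunk in stream:
  delta = chunk.choices[0].delta.content
  if delta:
    print(delta, end="", flush=True)
```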
</Accordion>

<Accordion title="What SDK support is available?">


OpenRouter is a drop-in replacement for OpenAI. Therefore, any SDKs that
support OpenAI by default also support OpenRouter. Check out our
[docs](/docs/frameworks) for more details.
</Accordion>
</AccordionGroup>

## Privacy and Data Logging

Please see our [Terms of Service](https://openrouter.ai/terms) and [Privacy Policy](https://openrouter.ai/privacy).

<AccordionGroup>
<Accordion title="What data is logged during API use?">
We log basic request metadata (timestamps, model used, token counts). Prompt
and completion are not logged by default. We do zero logging of your prompts/completions,
even if an error occurs, unless you opt-in to logging them.

We have an opt-in [setting](https://openrouter.ai/settings/privacy) that


lets users opt-in to log their prompts and completions in exchange for a 1%
discount on usage costs.
</Accordion>

<Accordion title="What data is logged during Chatroom use?">


The same data privacy applies to the chatroom as the API. All conversations
in the chatroom are stored locally on your device. Conversations will not sync across devices.
It is possible to export and import conversations using the settings menu in the chatroom.
</Accordion>

<Accordion title="What third-party sharing occurs?">


OpenRouter is a proxy that sends your requests to the model provider for it to be completed.
We work with all providers to, when possible, ensure that prompts and completions are not
logged or used for training.
Providers that do log, or where we have been unable to confirm their policy, will not be
routed to unless the model training
toggle is switched on in the [privacy settings](https://openrouter.ai/settings/privacy) tab.

If you specify [provider routing](/docs/features/provider-routing) in your request, but none
of the providers
match the level of privacy specified in your account settings, you will get an error and your
request will not complete.
</Accordion>
</AccordionGroup>

## Credit and Billing Systems

<AccordionGroup>
<Accordion title="What purchase options exist?">
OpenRouter uses a credit system where the base currency is US dollars. All
of the pricing on our site and API is denoted in dollars. Users can top up
their balance manually or set up auto top up so that the balance is
replenished when it gets below the set threshold.
</Accordion>

<Accordion title="Do credits expire?">


Per our [terms](https://openrouter.ai/terms), we reserve the right to expire
unused credits after one year of purchase.
</Accordion>

<Accordion title="My credits haven't showed up in my account">


If you paid using Stripe, sometimes there is an issue with the Stripe
integration and credits can be delayed in showing up on your account. Please allow up to one
hour. If your credits still have not appeared after an hour, contact us on
[Discord](https://discord.gg/fVyRaUDgxW) and we will look into it.

If you paid using crypto, please reach out to us on [Discord](https://discord.gg/fVyRaUDgxW)
and we will look into it.
</Accordion>

<Accordion title="What's the refund policy?">


Refunds for unused Credits may be requested within twenty-four (24) hours from the time the
transaction was processed. If no refund request is received within twenty-four (24) hours
following the purchase, any unused Credits become non-refundable. To request a refund within the
eligible period, email OpenRouter at [[email protected]](mailto:[email protected]). The unused
credit amount will be refunded to your payment method; the platform fees are non-refundable.
Note that cryptocurrency payments are never refundable.
</Accordion>

<Accordion title="How to monitor credit usage?">


The [Activity](https://openrouter.ai/activity) page allows users to view
their historic usage and filter it by model, provider, and API key.

We also provide a [credits api](/docs/api-reference/get-credits) that has live information about
the balance and remaining credits for the account.
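
A sketch of calling it (the exact response fields are described in the credits API docs linked
above):

```python
import requests

response = requests.get(
  "https://openrouter.ai/api/v1/credits",
  headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
)
print(response.json())  # current balance and usage for the account
```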
</Accordion>

<Accordion title="What free tier options exist?">


All new users receive a very small free allowance to be able to test out OpenRouter.
There are many [free models](https://openrouter.ai/models?max_price=0) available
on OpenRouter, it is important to note that these models have low rate limits
({FREE_MODEL_NO_CREDITS_RPD} requests per day total)
and are usually not suitable for production use. If you have purchased at least
{FREE_MODEL_CREDITS_THRESHOLD} credits,
the free models will be limited to {FREE_MODEL_HAS_CREDITS_RPD} requests per day.
</Accordion>

<Accordion title="How do volume discounts work?">


OpenRouter does not currently offer volume discounts, but you can reach out to us
over email if you think you have an exceptional use case.
</Accordion>

<Accordion title="What payment methods are accepted?">


We accept all major credit cards, AliPay and cryptocurrency payments in
USDC. We are working on integrating PayPal soon, if there are any payment

https://openrouter.ai/docs/llms-full.txt 6/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
methods that you would like us to support please reach out on [Discord]
(https://discord.gg/fVyRaUDgxW).
</Accordion>

<Accordion title="How does OpenRouter make money?">


We charge a small [fee](https://openrouter.ai/terms#_4_-payment) when you purchase credits. We
never mark up the pricing of the underlying providers, and you'll always pay the same as the
provider's listed price.
</Accordion>
</AccordionGroup>

## Account Management

<AccordionGroup>
<Accordion title="How can I delete my account?">
Go to the [Settings](https://openrouter.ai/settings/preferences) page and click Manage
Account.
In the modal that opens, select the Security tab. You'll find an option there to delete your
account.

Note that unused credits will be lost and cannot be reclaimed if you delete and later recreate
your account.
</Accordion>

<Accordion title="How does team access work?">


Team management is coming very soon! For now you can use [provisioning API
keys](/docs/features/provisioning-api-keys) to allow sharing credits with
people on your team.
</Accordion>

<Accordion title="What analytics are available?">


Our [activity dashboard](https://openrouter.ai/activity) provides real-time
usage metrics. If you would like any specific reports or metrics please
contact us.
</Accordion>

<Accordion title="How can I contact support?">


The best way to reach us is to join our
[Discord](https://discord.gg/fVyRaUDgxW) and ping us in the #help forum.
</Accordion>
</AccordionGroup>

# Principles

> Learn about OpenRouter's guiding principles and mission. Understand our commitment to price
optimization, standardized APIs, and high availability in AI model deployment.

OpenRouter helps developers source and optimize AI usage. We believe the future is multi-model and
multi-provider.

## Why OpenRouter?

**Price and Performance**. OpenRouter scouts for the best prices, the lowest latencies, and the
highest throughput across dozens of providers, and lets you choose how to [prioritize]
(/docs/features/provider-routing) them.

**Standardized API**. No need to change code when switching between models or providers. You can
even let your users [choose and pay for their own](/docs/use-cases/oauth-pkce).

**Real-World Insights**. Be the first to take advantage of new models. See real-world data of [how
often models are used](https://openrouter.ai/rankings) for different purposes. Keep up to date in
our [Discord channel](https://discord.com/channels/1091220969173028894/1094454198688546826).

**Consolidated Billing**. Simple and transparent billing, regardless of how many providers you
use.

**Higher Availability**. Fallback providers and automatic, smart routing mean your requests
still work even when providers go down.

**Higher Rate Limits**. OpenRouter works directly with providers to provide better rate limits and
more throughput.

# Models

> Access over 300 AI models through OpenRouter's unified API. Browse available models, compare
capabilities, and integrate with your preferred provider.

OpenRouter strives to provide access to every potentially useful text-based AI model. We currently
support over 300 model endpoints.

If there are models or providers you are interested in that OpenRouter doesn't have, please tell
us about them in our [Discord channel](https://discord.gg/fVyRaUDgxW).

<Note title="Different models tokenize text in different ways">


Some models break up text into chunks of multiple characters (GPT, Claude,
Llama, etc), while others tokenize by character (PaLM). This means that token
counts (and therefore costs) will vary between models, even when inputs and
outputs are the same. Costs are displayed and billed according to the
tokenizer for the model in use. You can use the `usage` field in the response
to get the token counts for the input and output.
</Note>

Explore and browse 300+ models and providers [on our website](https://openrouter.ai/models), or
[with our API](/docs/api-reference/list-available-models).

## For Providers

If you're interested in working with OpenRouter, you can learn more on our [providers page](/docs/use-cases/for-providers).

# Privacy, Logging, and Data Collection

> Learn how OpenRouter & its providers handle your data, including logging and data collection.

When using AI through OpenRouter, whether via the chat interface or the API, your prompts and
responses go through multiple touchpoints. You have control over how your data is handled at each
step.

This page is designed to give a practical overview of how your data is handled, stored, and used.
More information is available in the [privacy policy](/privacy) and [terms of service](/terms).

## Within OpenRouter

OpenRouter does not store your prompts or responses, *unless* you have explicitly opted in to
prompt logging in your account settings. It's as simple as that.

OpenRouter samples a small number of prompts for categorization to power our reporting and model
ranking. If you are not opted in to prompt logging, any categorization of your prompts is stored
completely anonymously and never associated with your account or user ID. The categorization is
done by model with a zero-data-retention policy.

OpenRouter does store metadata (e.g. number of prompt and completion tokens, latency, etc.) for
each request. This is used to power our reporting and model ranking, and your
[activity feed](/activity).

## Provider Policies

### Training on Prompts

Each provider on OpenRouter has its own data handling policies. We reflect those policies in
structured data on each AI endpoint that we offer.

On your account settings page, you can set whether you would like to allow routing to providers
that may train on your data (according to their own policies). There are separate settings for
paid and free models.

Wherever possible, OpenRouter works with providers to ensure that prompts will not be trained on,
but there are exceptions. If you opt out of training in your account settings, OpenRouter will not
route to providers that train. This setting has no bearing on OpenRouter's own policies and what
we do with your prompts.

<Tip title="Data Policy Filtering">


You can [restrict individual requests](/docs/features/provider-routing#requiring-providers-to-
comply-with-data-policies)
to only use providers with a certain data policy.

This is also available as an account-wide setting in [your privacy settings]


(https://openrouter.ai/settings/privacy).
</Tip>

### Data Retention & Logging

Providers also have their own data retention policies, often for compliance reasons. OpenRouter
does not have routing rules that change based on data retention policies of providers, but the
retention policies as reflected in each provider's terms are shown below. Any user of OpenRouter
can ignore providers that don't meet their own data retention requirements.

The full terms of service for each provider are linked from the provider's page, and aggregated in
the [documentation](/docs/features/provider-routing#terms-of-service).

<ProviderDataRetentionTable />

# Model Routing

> Route requests dynamically between AI models. Learn how to use OpenRouter's Auto Router and
model fallback features for optimal performance and reliability.

OpenRouter provides two options for model routing.

## Auto Router

The [Auto Router](https://openrouter.ai/openrouter/auto) is a special model ID that you can use to
automatically choose between selected high-quality models based on your prompt, powered by
[NotDiamond](https://www.notdiamond.ai/).

```json
{
  "model": "openrouter/auto",
  ... // Other params
}
```

The resulting generation will have `model` set to the model that was used.

## The `models` parameter

The `models` parameter lets you automatically try other models if the primary model's providers
are down, rate-limited, or refuse to reply due to content moderation.

```json
{
  "models": ["anthropic/claude-3.5-sonnet", "gryphe/mythomax-l2-13b"],
  ... // Other params
}
```

If the model you selected returns an error, OpenRouter will try to use the fallback model instead.
If the fallback model is down or returns an error, OpenRouter will return that error.

By default, any error can trigger the use of a fallback model, including context length validation
errors, moderation flags for filtered models, rate-limiting, and downtime.

Requests are priced using the model that was ultimately used, which will be returned in the
`model` attribute of the response body.

## Using with OpenAI SDK

To use the `models` array with the OpenAI SDK, include it in the `extra_body` parameter. In the
example below, gpt-4o will be tried first, and the `models` array will be tried in order as
fallbacks.

<Template
  data={{
    API_KEY_REF,
  }}
>
<CodeGroup>
```typescript
import OpenAI from 'openai';

const openrouterClient = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  // API key and headers
});

async function main() {
  // @ts-expect-error
  const completion = await openrouterClient.chat.completions.create({
    model: 'openai/gpt-4o',
    models: ['anthropic/claude-3.5-sonnet', 'gryphe/mythomax-l2-13b'],
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });
  console.log(completion.choices[0].message);
}

main();
```

```python
from openai import OpenAI

openai_client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key={{API_KEY_REF}},
)

completion = openai_client.chat.completions.create(
  model="openai/gpt-4o",
  extra_body={
    "models": ["anthropic/claude-3.5-sonnet", "gryphe/mythomax-l2-13b"],
  },
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?",
    }
  ],
)

print(completion.choices[0].message.content)
```
</CodeGroup>
</Template>

# Provider Routing


> Route AI model requests across multiple providers intelligently. Learn how to optimize for cost,
performance, and reliability with OpenRouter's provider routing.

OpenRouter routes requests to the best available providers for your model. By default, [requests
are load balanced](#load-balancing-default-strategy) across the top providers to maximize uptime.

You can customize how your requests are routed using the `provider` object in the request body for
[Chat Completions](/docs/api-reference/chat-completion) and
[Completions](/docs/api-reference/completion).

<Tip>
For a complete list of valid provider names to use in the API, see the [full
provider schema](#json-schema-for-provider-preferences).
</Tip>

The `provider` object can contain the following fields:

| Field | Type | Default | Description |
| -------------------- | ----------------- | ------- | ----------- |
| `order` | string\[] | - | List of provider names to try in order (e.g. `["Anthropic", "OpenAI"]`). [Learn more](#ordering-specific-providers) |
| `allow_fallbacks` | boolean | `true` | Whether to allow backup providers when the primary is unavailable. [Learn more](#disabling-fallbacks) |
| `require_parameters` | boolean | `false` | Only use providers that support all parameters in your request. [Learn more](#requiring-providers-to-support-all-parameters-beta) |
| `data_collection` | "allow" \| "deny" | "allow" | Control whether to use providers that may store data. [Learn more](#requiring-providers-to-comply-with-data-policies) |
| `only` | string\[] | - | List of provider names to allow for this request. [Learn more](#allowing-only-specific-providers) |
| `ignore` | string\[] | - | List of provider names to skip for this request. [Learn more](#ignoring-providers) |
| `quantizations` | string\[] | - | List of quantization levels to filter by (e.g. `["int4", "int8"]`). [Learn more](#quantization) |
| `sort` | string | - | Sort providers by price or throughput (e.g. `"price"` or `"throughput"`). [Learn more](#provider-sorting) |
| `max_price` | object | - | The maximum pricing you want to pay for this request. [Learn more](#maximum-price) |

## Price-Based Load Balancing (Default Strategy)

For each model in your request, OpenRouter's default behavior is to load balance requests across
providers, prioritizing price.

If you are more sensitive to throughput than price, you can use the `sort` field to explicitly
prioritize throughput.

<Tip>
When you send a request with `tools` or `tool_choice`, OpenRouter will only
route to providers that support tool use. Similarly, if you set a
`max_tokens`, then OpenRouter will only route to providers that support a
response of that length.
</Tip>

Here is OpenRouter's default load balancing strategy:

1. Prioritize providers that have not seen significant outages in the last 30 seconds.
2. For the stable providers, look at the lowest-cost candidates and select one weighted by inverse
square of the price (example below).
3. Use the remaining providers as fallbacks.

<Note title="A Load Balancing Example">


If Provider A costs \$1 per million tokens, Provider B costs \$2, and Provider C costs \$3, and
Provider B recently saw a few outages.

* Your request is routed to Provider A. Provider A is 9x more likely to be first routed to


Provider A than Provider C because $(1 / 3^2 = 1/9)$ (inverse square of the price).

https://openrouter.ai/docs/llms-full.txt 11/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
* If Provider A fails, then Provider C will be tried next.
* If Provider C also fails, Provider B will be tried last.
</Note>
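
As a quick illustration of the weighting math above (a sketch only; OpenRouter's actual router
logic is internal):

```python
# Weight each stable provider by the inverse square of its price.
# Provider B is excluded here due to recent outages.
prices = {"A": 1.0, "C": 3.0}  # $ per million tokens
weights = {name: 1 / price**2 for name, price in prices.items()}
total = sum(weights.values())
probabilities = {name: w / total for name, w in weights.items()}
print(probabilities)  # {'A': 0.9, 'C': 0.1} -> A is 9x more likely than C
```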

If you have `sort` or `order` set in your provider preferences, load balancing will be disabled.

## Provider Sorting

As described above, OpenRouter load balances based on price, while taking uptime into account.

If you instead want to *explicitly* prioritize a particular provider attribute, you can include
the `sort` field in the `provider` preferences. Load balancing will be disabled, and the router
will try providers in order.

The three sort options are:

* `"price"`: prioritize lowest price
* `"throughput"`: prioritize highest throughput
* `"latency"`: prioritize lowest latency

<TSFetchCodeBlock
  title="Example with Fallbacks Enabled"
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'meta-llama/llama-3.1-70b-instruct',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      sort: 'throughput',
    },
  }}
/>

To *always* prioritize low prices, and not apply any load balancing, set `sort` to `"price"`.

To *always* prioritize low latency, and not apply any load balancing, set `sort` to `"latency"`.

## Nitro Shortcut

You can append `:nitro` to any model slug as a shortcut to sort by throughput. This is exactly
equivalent to setting `provider.sort` to `"throughput"`.

<TSFetchCodeBlock
  title="Example using Nitro shortcut"
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'meta-llama/llama-3.1-70b-instruct:nitro',
    messages: [{ role: 'user', content: 'Hello' }],
  }}
/>

## Floor Price Shortcut

You can append `:floor` to any model slug as a shortcut to sort by price. This is exactly
equivalent to setting `provider.sort` to `"price"`.

<TSFetchCodeBlock
  title="Example using Floor shortcut"
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'meta-llama/llama-3.1-70b-instruct:floor',
    messages: [{ role: 'user', content: 'Hello' }],
  }}
/>

## Ordering Specific Providers

You can set the providers that OpenRouter will prioritize for your request using the `order`
field.

| Field | Type | Default | Description |
| ------- | --------- | ------- | ------------------------------------------------------------------------ |
| `order` | string\[] | - | List of provider names to try in order (e.g. `["Anthropic", "OpenAI"]`). |

The router will prioritize providers in this list, and in this order, for the model you're using.
If you don't set this field, the router will [load balance](#load-balancing-default-strategy)
across the top providers to maximize uptime.

OpenRouter will try them one at a time and proceed to other providers if none are operational. If
you don't want to allow any other providers, you should [disable fallbacks](#disabling-fallbacks)
as well.

### Example: Specifying providers with fallbacks

This example skips over OpenAI (which doesn't host Mixtral), tries Together, and then falls back
to the normal list of providers on OpenRouter:

<TSFetchCodeBlock
  title="Example with Fallbacks Enabled"
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'mistralai/mixtral-8x7b-instruct',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      order: ['OpenAI', 'Together'],
    },
  }}
/>

### Example: Specifying providers with fallbacks disabled

Here's an example with `allow_fallbacks` set to `false` that skips over OpenAI (which doesn't host
Mixtral), tries Together, and then fails if Together fails:

<TSFetchCodeBlock
  title="Example with Fallbacks Disabled"
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'mistralai/mixtral-8x7b-instruct',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      order: ['OpenAI', 'Together'],
      allow_fallbacks: false,
    },
  }}
/>

## Requiring Providers to Support All Parameters

You can restrict requests only to providers that support all parameters in your request using the
`require_parameters` field.

| Field | Type | Default | Description |
| -------------------- | ------- | ------- | --------------------------------------------------------------- |
| `require_parameters` | boolean | `false` | Only use providers that support all parameters in your request. |

With the default routing strategy, providers that don't support all the
[LLM parameters](/docs/api-reference/parameters) specified in your request can still receive the
request, but will ignore unknown parameters. When you set `require_parameters` to `true`, the
request won't even be routed to that provider.

### Example: Excluding providers that don't support JSON formatting

For example, to only use providers that support JSON formatting:

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      require_parameters: true,
    },
    response_format: { type: 'json_object' },
  }}
/>

## Requiring Providers to Comply with Data Policies

You can restrict requests only to providers that comply with your data policies using the
`data_collection` field.

| Field | Type | Default | Description |
| ----------------- | ----------------- | ------- | ----------------------------------------------------- |
| `data_collection` | "allow" \| "deny" | "allow" | Control whether to use providers that may store data. |

* `allow`: (default) allow providers which store user data non-transiently and may train on it
* `deny`: use only providers which do not collect user data

Some model providers may log prompts, so we display them with a **Data Policy** tag on model
pages. This is not a definitive source of third-party data policies, but represents our best
knowledge.

<Tip title="Account-Wide Data Policy Filtering">


This is also available as an account-wide setting in [your privacy
settings](https://openrouter.ai/settings/privacy). You can disable third party
model providers that store inputs for training.
</Tip>

### Example: Excluding providers that don't comply with data policies

To exclude providers that don't comply with your data policies, set `data_collection` to `deny`:

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      data_collection: 'deny', // or "allow"
    },
  }}
/>

## Disabling Fallbacks

To guarantee that your request is only served by the top (lowest-cost) provider, you can disable
fallbacks.

This is combined with the `order` field from
[Ordering Specific Providers](#ordering-specific-providers) to restrict the providers that
OpenRouter will prioritize to just your chosen list.

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      allow_fallbacks: false,
    },
  }}
/>


## Allowing Only Specific Providers

You can allow only specific providers for a request by setting the `only` field in the `provider`
object.

| Field | Type | Default | Description |
| ------ | --------- | ------- | ------------------------------------------------- |
| `only` | string\[] | - | List of provider names to allow for this request. |

<Warning>
Only allowing some providers may significantly reduce fallback options and
limit request recovery.
</Warning>

<Tip title="Account-Wide Allowed Providers">


You can allow providers for all account requests by configuring your [preferences]
(/settings/preferences). This configuration applies to all API requests and chatroom messages.

Note that when you allow providers for a specific request, the list of allowed providers is
merged with your account-wide allowed providers.
</Tip>

### Example: Allowing Azure for a request calling GPT-4 Omni

Here's an example that will only use Azure for a request calling GPT-4 Omni:

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      only: ['Azure'],
    },
  }}
/>

## Ignoring Providers

You can ignore providers for a request by setting the `ignore` field in the `provider` object.

| Field | Type | Default | Description |
| -------- | --------- | ------- | ------------------------------------------------ |
| `ignore` | string\[] | - | List of provider names to skip for this request. |

<Warning>
Ignoring multiple providers may significantly reduce fallback options and
limit request recovery.
</Warning>

<Tip title="Account-Wide Ignored Providers">


You can ignore providers for all account requests by configuring your [preferences]
(/settings/preferences). This configuration applies to all API requests and chatroom messages.

Note that when you ignore providers for a specific request, the list of ignored providers is
merged with your account-wide ignored providers.
</Tip>

### Example: Ignoring Azure for a request calling GPT-4 Omni

Here's an example that will ignore Azure for a request calling GPT-4 Omni:

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      ignore: ['Azure'],
    },
  }}
/>

## Quantization

Quantization reduces model size and computational requirements while aiming to preserve
performance. Most LLMs today use FP16 or BF16 for training and inference, cutting memory
requirements in half compared to FP32. Some optimizations use FP8 or quantization to reduce size
further (e.g., INT8, INT4).

| Field | Type | Default | Description |
| --------------- | --------- | ------- | ------------------------------------------------------------------------------------------------ |
| `quantizations` | string\[] | - | List of quantization levels to filter by (e.g. `["int4", "int8"]`). [Learn more](#quantization) |

<Warning>
Quantized models may exhibit degraded performance for certain prompts,
depending on the method used.
</Warning>

Providers can support various quantization levels for open-weight models.

### Quantization Levels

By default, requests are load-balanced across all available providers, ordered by price. To filter
providers by quantization level, specify the `quantizations` field in the `provider` parameter
with the following values:

* `int4`: Integer (4 bit)
* `int8`: Integer (8 bit)
* `fp4`: Floating point (4 bit)
* `fp6`: Floating point (6 bit)
* `fp8`: Floating point (8 bit)
* `fp16`: Floating point (16 bit)
* `bf16`: Brain floating point (16 bit)
* `fp32`: Floating point (32 bit)
* `unknown`: Unknown

### Example: Requesting FP8 Quantization

Here's an example that will only use providers that support FP8 quantization:

<TSFetchCodeBlock
  uriPath="/api/v1/chat/completions"
  body={{
    model: 'meta-llama/llama-3.1-8b-instruct',
    messages: [{ role: 'user', content: 'Hello' }],
    provider: {
      quantizations: ['fp8'],
    },
  }}
/>

## Maximum Price

To filter providers by price, specify the `max_price` field in the `provider` parameter with a
JSON object specifying the highest provider pricing you will accept.

For example, the value `{"prompt": 1, "completion": 2}` will route to any provider with a price of
`<= $1/m` prompt tokens, and `<= $2/m` completion tokens or less.

Some providers support per request pricing, in which case you can use the `request` attribute of
max\_price. Lastly, `image` is also available, which specifies the max price per image you will
accept.

Practically, this field is often combined with a provider `sort` to express, for example, "Use the
provider with the highest throughput, as long as it doesn't cost more than `$x/m` tokens."
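
A sketch of that combination (the model slug and price caps are illustrative):

```python
import requests

response = requests.post(
  url="https://openrouter.ai/api/v1/chat/completions",
  headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
  json={
    "model": "meta-llama/llama-3.1-70b-instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": {
      "sort": "throughput",  # prefer the fastest provider...
      "max_price": {"prompt": 1, "completion": 2},  # ...capped at these $/M token prices
    },
  },
)
```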

## Terms of Service

You can view the terms of service for each provider below. You may not violate the terms of
service or policies of third-party providers that power the models on OpenRouter.

* `AI21`: [https://www.ai21.com/terms-of-service/](https://www.ai21.com/terms-of-service/)
* `AionLabs`: [https://www.aionlabs.ai/terms/](https://www.aionlabs.ai/terms/)
* `Alibaba`: [https://www.alibabacloud.com/help/en/legal/latest/alibaba-cloud-international-website-product-terms-of-service-v-3-8-0](https://www.alibabacloud.com/help/en/legal/latest/alibaba-cloud-international-website-product-terms-of-service-v-3-8-0)
* `Amazon Bedrock`: [https://aws.amazon.com/service-terms/](https://aws.amazon.com/service-terms/)
* `Anthropic`: [https://www.anthropic.com/legal/commercial-terms](https://www.anthropic.com/legal/commercial-terms)
* `Atoma`: [https://atoma.network/terms\_of\_service](https://atoma.network/terms_of_service)
* `Avian.io`: [https://avian.io/terms](https://avian.io/terms)
* `Azure`: [https://www.microsoft.com/en-us/legal/terms-of-use?oneroute=true](https://www.microsoft.com/en-us/legal/terms-of-use?oneroute=true)
* `CentML`: [https://centml.ai/terms-of-service/](https://centml.ai/terms-of-service/)
* `Cerebras`: [https://www.cerebras.ai/terms-of-service](https://www.cerebras.ai/terms-of-service)
* `Chutes`: [https://chutes.ai/tos](https://chutes.ai/tos)
* `Cloudflare`: [https://www.cloudflare.com/service-specific-terms-developer-platform/#developer-platform-terms](https://www.cloudflare.com/service-specific-terms-developer-platform/#developer-platform-terms)
* `Cohere`: [https://cohere.com/terms-of-use](https://cohere.com/terms-of-use)
* `Crusoe`: [https://legal.crusoe.ai/open-router#managed-inference-tos-open-router](https://legal.crusoe.ai/open-router#managed-inference-tos-open-router)
* `DeepInfra`: [https://deepinfra.com/terms](https://deepinfra.com/terms)
* `DeepSeek`: [https://chat.deepseek.com/downloads/DeepSeek%20Terms%20of%20Use.html](https://chat.deepseek.com/downloads/DeepSeek%20Terms%20of%20Use.html)
* `Enfer`: [https://enfer.ai/privacy-policy](https://enfer.ai/privacy-policy)
* `Featherless`: [https://featherless.ai/terms](https://featherless.ai/terms)
* `Fireworks`: [https://fireworks.ai/terms-of-service](https://fireworks.ai/terms-of-service)
* `Friendli`: [https://friendli.ai/terms-of-service](https://friendli.ai/terms-of-service)
* `GMICloud`: [https://docs.gmicloud.ai/privacy](https://docs.gmicloud.ai/privacy)
* `Google Vertex`: [https://cloud.google.com/terms/](https://cloud.google.com/terms/)
* `Google AI Studio`: [https://cloud.google.com/terms/](https://cloud.google.com/terms/)
* `Groq`: [https://groq.com/terms-of-use/](https://groq.com/terms-of-use/)
* `Hyperbolic`: [https://hyperbolic.xyz/terms](https://hyperbolic.xyz/terms)
* `Inception`: [https://www.inceptionlabs.ai/terms](https://www.inceptionlabs.ai/terms)
* `inference.net`: [https://inference.net/terms-of-service](https://inference.net/terms-of-service)
* `Infermatic`: [https://infermatic.ai/terms-and-conditions/](https://infermatic.ai/terms-and-conditions/)
* `Inflection`: [https://developers.inflection.ai/tos](https://developers.inflection.ai/tos)
* `InoCloud`: [https://inocloud.com/terms](https://inocloud.com/terms)
* `kluster.ai`: [https://www.kluster.ai/terms-of-use](https://www.kluster.ai/terms-of-use)
* `Lambda`: [https://lambda.ai/legal/terms-of-service](https://lambda.ai/legal/terms-of-service)
* `Liquid`: [https://www.liquid.ai/terms-conditions](https://www.liquid.ai/terms-conditions)
* `Mancer`: [https://mancer.tech/terms](https://mancer.tech/terms)
* `Mancer (private)`: [https://mancer.tech/terms](https://mancer.tech/terms)
* `Minimax`: [https://www.minimax.io/platform/protocol/terms-of-service](https://www.minimax.io/platform/protocol/terms-of-service)
* `Mistral`: [https://mistral.ai/terms/#terms-of-use](https://mistral.ai/terms/#terms-of-use)
* `nCompass`: [https://ncompass.tech/terms](https://ncompass.tech/terms)
* `Nebius AI Studio`: [https://docs.nebius.com/legal/studio/terms-of-use/](https://docs.nebius.com/legal/studio/terms-of-use/)
* `NextBit`: [https://www.nextbit256.com/docs/terms-of-service](https://www.nextbit256.com/docs/terms-of-service)
* `Nineteen`: [https://nineteen.ai/tos](https://nineteen.ai/tos)
* `NovitaAI`: [https://novita.ai/legal/terms-of-service](https://novita.ai/legal/terms-of-service)
* `OpenAI`: [https://openai.com/policies/row-terms-of-use/](https://openai.com/policies/row-terms-of-use/)
* `OpenInference`: [https://www.openinference.xyz/terms](https://www.openinference.xyz/terms)
* `Parasail`: [https://www.parasail.io/legal/terms](https://www.parasail.io/legal/terms)
* `Perplexity`: [https://www.perplexity.ai/hub/legal/perplexity-api-terms-of-service](https://www.perplexity.ai/hub/legal/perplexity-api-terms-of-service)
* `Phala`: [https://red-pill.ai/terms](https://red-pill.ai/terms)
* `SambaNova`: [https://sambanova.ai/terms-and-conditions](https://sambanova.ai/terms-and-conditions)
* `Targon`: [https://targon.com/terms](https://targon.com/terms)
* `Together`: [https://www.together.ai/terms-of-service](https://www.together.ai/terms-of-service)
* `Ubicloud`: [https://www.ubicloud.com/docs/about/terms-of-service](https://www.ubicloud.com/docs/about/terms-of-service)
* `Venice`: [https://venice.ai/legal/tos](https://venice.ai/legal/tos)
* `xAI`: [https://x.ai/legal/terms-of-service](https://x.ai/legal/terms-of-service)

## JSON Schema for Provider Preferences

For a complete list of options, see this JSON schema:

<ZodToJSONSchemaBlock title="Provider Preferences Schema" schema={ProviderPreferencesSchema} />

# Prompt Caching

> Reduce your AI model costs with OpenRouter's prompt caching feature. Learn how to cache and
reuse responses across OpenAI, Anthropic Claude, and DeepSeek models.

To save on inference costs, you can enable prompt caching on supported providers and models.

Most providers automatically enable prompt caching, but note that some (see Anthropic below)
require you to enable it on a per-message basis.

When using caching (whether automatically in supported models, or via the `cache_control` header),
OpenRouter will make a best-effort to continue routing to the same provider to make use of the
warm cache. In the event that the provider with your cached prompt is not available, OpenRouter
will try the next-best provider.

## Inspecting cache usage

To see how much caching saved on each generation, you can:

1. Click the detail button on the [Activity](/activity) page
2. Use the `/api/v1/generation` API, [documented here](/api-reference/overview#querying-cost-and-stats)
3. Use `usage: {include: true}` in your request to get the cache tokens at the end of the response
(see [Usage Accounting](/use-cases/usage-accounting) for details)

The `cache_discount` field in the response body will tell you how much the response saved on cache
usage. Some providers, like Anthropic, will have a negative discount on cache writes, but a
positive discount (which reduces total cost) on cache reads.
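
For example, a sketch of option 2 above, fetching the stats for a completed generation by the
`id` returned with your completion (the `data.cache_discount` field path is an assumption based
on the endpoint docs linked above):

```python
import requests

response = requests.get(
  "https://openrouter.ai/api/v1/generation",
  params={"id": "<GENERATION_ID>"},  # the "id" field from the completion response
  headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
)
print(response.json()["data"]["cache_discount"])
```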

## OpenAI

Caching price changes:

* **Cache writes**: no cost
* **Cache reads**: charged at {OPENAI_CACHE_READ_MULTIPLIER}x the price of the original input pricing

Prompt caching with OpenAI is automated and does not require any additional configuration. There
is a minimum prompt size of 1024 tokens.

[Click here to read more about OpenAI prompt caching and its limitations.](https://openai.com/index/api-prompt-caching/)

## Anthropic Claude

Caching price changes:

* **Cache writes**: charged at {ANTHROPIC_CACHE_WRITE_MULTIPLIER}x the price of the original input pricing
* **Cache reads**: charged at {ANTHROPIC_CACHE_READ_MULTIPLIER}x the price of the original input pricing


Prompt caching with Anthropic requires the use of `cache_control` breakpoints. There is a limit of
four breakpoints, and the cache will expire within five minutes. Therefore, it is recommended to
reserve the cache breakpoints for large bodies of text, such as character cards, CSV data, RAG
data, book chapters, etc.

[Click here to read more about Anthropic prompt caching and its limitations.](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)

The `cache_control` breakpoint can only be inserted into the text part of a multipart message.

System message caching example:

```json
{
  "messages": [
    {
      "role": "system",
      "content": [
        {
          "type": "text",
          "text": "You are a historian studying the fall of the Roman Empire. You know the following book very well:"
        },
        {
          "type": "text",
          "text": "HUGE TEXT BODY",
          "cache_control": {
            "type": "ephemeral"
          }
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What triggered the collapse?"
        }
      ]
    }
  ]
}
```

User message caching example:

```json
{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Given the book below:"
        },
        {
          "type": "text",
          "text": "HUGE TEXT BODY",
          "cache_control": {
            "type": "ephemeral"
          }
        },
        {
          "type": "text",
          "text": "Name all the characters in the above book"
        }
      ]
    }
  ]
}
```

## DeepSeek

Caching price changes:

* **Cache writes**: charged at the same price as the original input pricing
* **Cache reads**: charged at {DEEPSEEK_CACHE_READ_MULTIPLIER}x the price of the original input
pricing

Prompt caching with DeepSeek is automated and does not require any additional configuration.

## Google Gemini

### Pricing Changes for Cached Requests:

* **Cache Writes:** Charged at the input token cost plus 5 minutes of cache storage, calculated as
follows:

```
Cache write cost = Input token price + (Cache storage price × (5 minutes / 60 minutes))
```

* **Cache Reads:** Charged at {GOOGLE_CACHE_READ_MULTIPLIER}× the original input token cost.

### Supported Models and Limitations:

Only certain Gemini models support caching. Please consult Google's [Gemini API Pricing Documentation](https://ai.google.dev/gemini-api/docs/pricing) for the most current details.

Cache Writes have a 5 minute Time-to-Live (TTL) that does not update. After 5 minutes, the cache
expires and a new cache must be written.

Gemini models have a 4,096 token minimum for cache write to occur. Cached tokens count towards the
model's maximum token usage.

### How Gemini Prompt Caching works on OpenRouter:

OpenRouter simplifies Gemini cache management, abstracting away complexities:

* You **do not** need to manually create, update, or delete caches.
* You **do not** need to manage cache names or TTL explicitly.

### How to Enable Gemini Prompt Caching:

Gemini caching in OpenRouter requires you to insert `cache_control` breakpoints explicitly within
message content, similar to Anthropic. We recommend using caching primarily for large content
pieces (such as CSV files, lengthy character cards, retrieval augmented generation (RAG) data, or
extensive textual sources).

<Tip>
  There is no limit on the number of `cache_control` breakpoints you can
  include in your request. Including multiple breakpoints is safe and can help
  maintain compatibility with Anthropic, but OpenRouter will use only the
  final breakpoint for Gemini caching.
</Tip>

### Examples:

#### System Message Caching Example

```json
{
"messages": [
{
"role": "system",
"content": [
        {
          "type": "text",
          "text": "You are a historian studying the fall of the Roman Empire. Below is an extensive reference book:"
},
{
"type": "text",
"text": "HUGE TEXT BODY HERE",
"cache_control": {
"type": "ephemeral"
}
}
]
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "What triggered the collapse?"
}
]
}
]
}
```

#### User Message Caching Example

```json
{
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Based on the book text below:"
},
{
"type": "text",
"text": "HUGE TEXT BODY HERE",
"cache_control": {
"type": "ephemeral"
}
},
{
"type": "text",
"text": "List all main characters mentioned in the text above."
}
]
}
]
}
```

# Structured Outputs

> Enforce JSON Schema validation on AI model responses. Get consistent, type-safe outputs and
avoid parsing errors with OpenRouter's structured output feature.

OpenRouter supports structured outputs for compatible models, ensuring responses follow a specific
JSON Schema format. This feature is particularly useful when you need consistent, well-formatted
responses that can be reliably parsed by your application.

## Overview


Structured outputs allow you to:

* Enforce specific JSON Schema validation on model responses
* Get consistent, type-safe outputs
* Avoid parsing errors and hallucinated fields
* Simplify response handling in your application

## Using Structured Outputs

To use structured outputs, include a `response_format` parameter in your request, with `type` set
to `json_schema` and the `json_schema` object containing your schema:

```typescript
{
"messages": [
{ "role": "user", "content": "What's the weather like in London?" }
],
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "weather",
"strict": true,
"schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City or location name"
},
"temperature": {
"type": "number",
"description": "Temperature in Celsius"
},
"conditions": {
"type": "string",
"description": "Weather conditions description"
}
},
"required": ["location", "temperature", "conditions"],
"additionalProperties": false
}
}
}
}
```

The model will respond with a JSON object that strictly follows your schema:

```json
{
"location": "London",
"temperature": 18,
"conditions": "Partly cloudy with light drizzle"
}
```

## Model Support

Structured outputs are supported by select models.

You can find a list of models that support structured outputs on the [models page](https://openrouter.ai/models?order=newest\&supported_parameters=structured_outputs).

* OpenAI models (GPT-4o and later versions) [Docs](https://platform.openai.com/docs/guides/structured-outputs)
* All Fireworks provided models [Docs](https://docs.fireworks.ai/structured-responses/structured-response-formatting#structured-response-modes)

To ensure your chosen model supports structured outputs:

1. Check the model's supported parameters on the [models page](https://openrouter.ai/models)
2. Set `require_parameters: true` in your provider preferences (see [Provider Routing](/docs/features/provider-routing))
3. Include `response_format` and set `type: json_schema` in the required parameters

## Best Practices

1. **Include descriptions**: Add clear descriptions to your schema properties to guide the model

2. **Use strict mode**: Always set `strict: true` to ensure the model follows your schema exactly

## Example Implementation

Here's a complete example using the Fetch API:

<Template
data={{
API_KEY_REF,
MODEL: 'openai/gpt-4'
}}
>
<CodeGroup>
```typescript title="With TypeScript"
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: 'Bearer {{API_KEY_REF}}',
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{ role: 'user', content: 'What is the weather like in London?' },
],
response_format: {
type: 'json_schema',
json_schema: {
name: 'weather',
strict: true,
schema: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'City or location name',
},
temperature: {
type: 'number',
description: 'Temperature in Celsius',
},
conditions: {
type: 'string',
description: 'Weather conditions description',
},
},
required: ['location', 'temperature', 'conditions'],
additionalProperties: false,
},
},
},
}),
});

const data = await response.json();


const weatherInfo = data.choices[0].message.content;
```

```python title="With Python"
import requests
import json

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
headers={
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json",
},

json={
"model": "{{MODEL}}",
"messages": [
{"role": "user", "content": "What is the weather like in London?"},
],
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "weather",
"strict": True,
"schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City or location name",
},
"temperature": {
"type": "number",
"description": "Temperature in Celsius",
},
"conditions": {
"type": "string",
"description": "Weather conditions description",
},
},
"required": ["location", "temperature", "conditions"],
"additionalProperties": False,
},
},
},
},
)

data = response.json()
weather_info = data["choices"][0]["message"]["content"]
```
</CodeGroup>
</Template>
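
Note that even with structured outputs, the message `content` arrives as a JSON string, so the last step is to parse it. Continuing the Python example above:

```python
import json

# `weather_info` is the JSON string returned in message.content above
weather = json.loads(weather_info)
print(weather["location"], weather["temperature"], weather["conditions"])
```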

## Streaming with Structured Outputs

Structured outputs are also supported with streaming responses. The model will stream valid
partial JSON that, when complete, forms a valid response matching your schema.

To enable streaming with structured outputs, simply add `stream: true` to your request:

```typescript
{
"stream": true,
"response_format": {
"type": "json_schema",
// ... rest of your schema
}
}
```
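
Here is a minimal sketch of consuming such a stream with the OpenAI SDK; the model and schema are illustrative:

```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)

weather_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "temperature": {"type": "number"},
            },
            "required": ["location", "temperature"],
            "additionalProperties": False,
        },
    },
}

stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "What's the weather like in London?"}],
    response_format=weather_format,
    stream=True,
)

buffer = ""
for chunk in stream:
    # Some chunks (e.g. a final usage-only chunk) may carry no choices
    if chunk.choices and chunk.choices[0].delta.content:
        buffer += chunk.choices[0].delta.content

weather = json.loads(buffer)  # complete JSON matching the schema
print(weather)
```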

## Error Handling


When using structured outputs, you may encounter these scenarios:

1. **Model doesn't support structured outputs**: The request will fail with an error indicating
lack of support
2. **Invalid schema**: The model will return an error if your JSON Schema is invalid
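
A sketch of handling both scenarios when calling the API directly, assuming the standard `error` object described in the API Reference below:

```python
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "openai/gpt-4o",
        "messages": [{"role": "user", "content": "What's the weather in London?"}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "weather",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                    "additionalProperties": False,
                },
            },
        },
    },
)

body = response.json()
if "error" in body:
    # Covers both cases: the model lacks structured output support,
    # or the supplied JSON Schema is invalid
    print(f"Request failed: {body['error'].get('message')}")
else:
    print(body["choices"][0]["message"]["content"])
```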

# Tool & Function Calling

> Use tools (or functions) in your prompts with OpenRouter. Learn how to use tools with OpenAI,
Anthropic, and other models that support tool calling.

Tool calls (also known as function calls) give an LLM access to external tools. The LLM does not
call the tools directly. Instead, it suggests the tool to call. The user then calls the tool
separately and provides the results back to the LLM. Finally, the LLM formats the response into an
answer to the user's original question.

OpenRouter standardizes the tool calling interface across models and providers.

For a primer on how tool calling works in the OpenAI SDK, please see [this article](https://platform.openai.com/docs/guides/function-calling?api-mode=chat), or if you prefer to learn from a full end-to-end example, keep reading.

### Tool Calling Example

Here is Python code that gives LLMs the ability to call an external API -- in this case Project
Gutenberg, to search for books.

First, let's do some basic setup:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
import json, requests
from openai import OpenAI

OPENROUTER_API_KEY = f"{{API_KEY_REF}}"

# You can use any model that supports tool calling


MODEL = "{{MODEL}}"

openai_client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
api_key=OPENROUTER_API_KEY,
)

task = "What are the titles of some James Joyce books?"

messages = [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": task,
}
]

```

```typescript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: `Bearer {{API_KEY_REF}}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{
role: 'user',
content: 'What are the titles of some James Joyce books?',
},
],
}),
});
```
</CodeGroup>
</Template>

### Define the Tool

Next, we define the tool that we want to call. Remember, the tool is going to get *requested* by
the LLM, but the code we are writing here is ultimately responsible for executing the call and
returning the results to the LLM.

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
def search_gutenberg_books(search_terms):
search_query = " ".join(search_terms)
    url = "https://gutendex.com/books"
response = requests.get(url, params={"search": search_query})

simplified_results = []
for book in response.json().get("results", []):
simplified_results.append({
"id": book.get("id"),
"title": book.get("title"),
"authors": book.get("authors")
})

return simplified_results

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_gutenberg_books",
            "description": "Search for books in the Project Gutenberg library based on specified search terms",
"parameters": {
"type": "object",
"properties": {
"search_terms": {
"type": "array",
"items": {
"type": "string"
                        },
                        "description": "List of search terms to find books in the Gutenberg library (e.g. ['dickens', 'great'] to search for books by Dickens with 'great' in the title)"
}
},
"required": ["search_terms"]
}
}
}
]

TOOL_MAPPING = {
"search_gutenberg_books": search_gutenberg_books
}

```

```typescript
type Book = { id: number; title: string; authors: unknown[] };

async function searchGutenbergBooks(searchTerms: string[]): Promise<Book[]> {
const searchQuery = searchTerms.join(' ');
  const url = 'https://gutendex.com/books';
const response = await fetch(`${url}?search=${searchQuery}`);
const data = await response.json();

return data.results.map((book: any) => ({


id: book.id,
title: book.title,
authors: book.authors,
}));
}

const tools = [
{
type: 'function',
function: {
name: 'search_gutenberg_books',
description:
'Search for books in the Project Gutenberg library based on specified search terms',
parameters: {
type: 'object',
properties: {
search_terms: {
type: 'array',
items: {
type: 'string',
},
            description:
              "List of search terms to find books in the Gutenberg library (e.g. ['dickens', 'great'] to search for books by Dickens with 'great' in the title)",
},
},
required: ['search_terms'],
},
},
},
];

const TOOL_MAPPING: Record<string, (args: any) => Promise<any>> = {
  // Keyed by the tool name the model will request
  search_gutenberg_books: (args: { search_terms: string[] }) =>
    searchGutenbergBooks(args.search_terms),
};
```
</CodeGroup>
</Template>

Note that the "tool" is just a normal function. We then write a JSON "spec" compatible with the
OpenAI function calling parameter. We'll pass that spec to the LLM so that it knows this tool is
available and how to use it. It will request the tool when needed, along with any arguments. We'll
then marshal the tool call locally, make the function call, and return the results to the LLM.

### Tool use and tool results

Let's make the first OpenRouter API call to the model:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
request_1 = {
    "model": MODEL,
"tools": tools,
"messages": messages
}

response_1 = openai_client.chat.completions.create(**request_1).choices[0].message
```

```typescript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: `Bearer {{API_KEY_REF}}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
tools,
messages,
}),
});
```
</CodeGroup>
</Template>

The LLM responds with a finish reason of tool\_calls, and a tool\_calls array. In a generic LLM
response-handler, you would want to check the finish reason before processing tool calls, but here
we will assume it's the case. Let's keep going, by processing the tool call:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
# Append the response to the messages array so the LLM has the full context
# It's easy to forget this step!
messages.append(response_1)

# Now we process the requested tool calls, and use our book lookup tool
for tool_call in response_1.tool_calls:
'''
In this case we only provided one tool, so we know what function to call.
When providing multiple tools, you can inspect `tool_call.function.name`
to figure out what function you need to call locally.
'''
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)
tool_response = TOOL_MAPPING[tool_name](**tool_args)
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"name": tool_name,
"content": json.dumps(tool_response),
})
```

```typescript
// Append the response to the messages array so the LLM has the full context
// It's easy to forget this step!
const data = await response.json();
const assistantMessage = data.choices[0].message;
messages.push(assistantMessage);

// Now we process the requested tool calls, and use our book lookup tool
for (const toolCall of assistantMessage.tool_calls) {
  const toolName = toolCall.function.name;
  const toolArgs = JSON.parse(toolCall.function.arguments);
  const toolResponse = await TOOL_MAPPING[toolName](toolArgs);
  messages.push({
    role: 'tool',
    tool_call_id: toolCall.id,
    name: toolName,
    content: JSON.stringify(toolResponse),
  });
}
```
</CodeGroup>
</Template>

The messages array now has:

1. Our original request
2. The LLM's response (containing a tool call request)
3. The result of the tool call (a JSON object returned from the Project Gutenberg API)

Now, we can make a second OpenRouter API call, and hopefully get our result!

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
request_2 = {
"model": MODEL,
"messages": messages,
"tools": tools
}

response_2 = openai_client.chat.completions.create(**request_2)

print(response_2.choices[0].message.content)
```

```typescript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: `Bearer {{API_KEY_REF}}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages,
tools,
}),
});

const data = await response.json();


console.log(data.choices[0].message.content);
```
</CodeGroup>
</Template>

The output will be something like:

```text
Here are some books by James Joyce:

* *Ulysses*
* *Dubliners*
* *A Portrait of the Artist as a Young Man*
* *Chamber Music*
* *Exiles: A Play in Three Acts*
```

We did it! We've successfully used a tool in a prompt.

## A Simple Agentic Loop

In the example above, the calls are made explicitly and sequentially. To handle a wide variety of
user inputs and tool calls, you can use an agentic loop.

Here's an example of a simple agentic loop (using the same `tools` and initial `messages` as
above):

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python

def call_llm(msgs):
resp = openai_client.chat.completions.create(
        model=MODEL,
tools=tools,
messages=msgs
)
msgs.append(resp.choices[0].message.dict())
return resp

def get_tool_response(response):
tool_call = response.choices[0].message.tool_calls[0]
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)

# Look up the correct tool locally, and call it with the provided arguments
# Other tools can be added without changing the agentic loop
tool_result = TOOL_MAPPING[tool_name](**tool_args)

    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "name": tool_name,
        "content": json.dumps(tool_result),
    }

while True:
    resp = call_llm(messages)

    if resp.choices[0].message.tool_calls is not None:
        messages.append(get_tool_response(resp))
    else:
        break

print(messages[-1]['content'])

```

```typescript
async function callLLM(messages: Message[]): Promise<Message> {
const response = await fetch(
    'https://openrouter.ai/api/v1/chat/completions',
{
method: 'POST',
headers: {
Authorization: `Bearer {{API_KEY_REF}}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
tools,
messages,
}),
},
);

  const data = await response.json();
  const message = data.choices[0].message;

  messages.push(message);
  return message;
}

async function getToolResponse(response: Message): Promise<Message> {
  const toolCall = response.tool_calls[0];
  const toolName = toolCall.function.name;
  const toolArgs = JSON.parse(toolCall.function.arguments);

  // Look up the correct tool locally, and call it with the provided arguments
  // Other tools can be added without changing the agentic loop
  const toolResult = await TOOL_MAPPING[toolName](toolArgs);

  return {
    role: 'tool',
    tool_call_id: toolCall.id,
    name: toolName,
    content: JSON.stringify(toolResult),
  };
}

while (true) {
const response = await callLLM(messages);

  if (response.tool_calls) {
messages.push(await getToolResponse(response));
} else {
break;
}
}

console.log(messages[messages.length - 1].content);
```
</CodeGroup>
</Template>

# Images & PDFs

> Sending images and PDFs to the OpenRouter API.

OpenRouter supports sending images and PDFs via the API. This guide will show you how to work with
both file types using our API.

Both images and PDFs also work in the chat room.

<Tip>
You can send both PDF and images in the same request.
</Tip>

## Image Inputs

Requests with images, to multimodal models, are available via the `/api/v1/chat/completions` API with a multi-part `messages` parameter. The `image_url` can either be a URL or a base64-encoded image. Note that multiple images can be sent in separate content array entries. The number of images you can send in a single request varies per provider and per model. Due to how the content is parsed, we recommend sending the text prompt first, then the images. If the images must come first, we recommend putting them in the system prompt.

### Using Image URLs

Here's how to send an image using a URL:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {API_KEY_REF}",
"Content-Type": "application/json"
}

messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What's in this image?"
},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
}
]
}
]

payload = {
"model": "{{MODEL}}",
"messages": messages
}

response = requests.post(url, headers=headers, json=payload)


print(response.json())
```

```typescript
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: "What's in this image?",
},
{
type: 'image_url',
image_url: {
              url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg',
},
},
],
},
],
}),
});

const data = await response.json();


console.log(data);
```
</CodeGroup>
</Template>

### Using Base64 Encoded Images

For locally stored images, you can send them using base64 encoding. Here's how to do it:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemini-2.0-flash-001'
}}
>
<CodeGroup>
```python
import requests
import json
import base64
from pathlib import Path

def encode_image_to_base64(image_path):
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode('utf-8')

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {API_KEY_REF}",
"Content-Type": "application/json"
}

# Read and encode the image


image_path = "path/to/your/image.jpg"
base64_image = encode_image_to_base64(image_path)
data_url = f"data:image/jpeg;base64,{base64_image}"

messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What's in this image?"
},
{
"type": "image_url",
"image_url": {
"url": data_url
}
}
]
}
]

payload = {
"model": "{{MODEL}}",
"messages": messages
}

response = requests.post(url, headers=headers, json=payload)


print(response.json())
```

```typescript
import fs from 'fs';

async function encodeImageToBase64(imagePath: string): Promise<string> {
const imageBuffer = await fs.promises.readFile(imagePath);
const base64Image = imageBuffer.toString('base64');
return `data:image/jpeg;base64,${base64Image}`;
}

// Read and encode the image


const imagePath = 'path/to/your/image.jpg';
const base64Image = await encodeImageToBase64(imagePath);

const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: "What's in this image?",
},
{
type: 'image_url',
image_url: {
url: base64Image,
},
},
],
},
],
}),
});

const data = await response.json();


console.log(data);
```
</CodeGroup>
</Template>

Supported image content types are:

* `image/png`
* `image/jpeg`
* `image/webp`
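
When base64-encoding local files, make sure the data URL prefix matches the actual content type. A small helper (illustrative, not part of the API):

```python
import base64
import mimetypes

def to_data_url(path: str) -> str:
    # Map the file extension to one of the supported MIME types
    mime, _ = mimetypes.guess_type(path)
    if mime not in ("image/png", "image/jpeg", "image/webp"):
        raise ValueError(f"Unsupported image type: {mime}")
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"
```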

## PDF Support

OpenRouter supports PDF processing through the `/api/v1/chat/completions` API. PDFs can be sent as
base64-encoded data URLs in the messages array, via the file content type. This feature works on
**any** model on OpenRouter.

<Info>
When a model supports file input natively, the PDF is passed directly to the
model. When the model does not support file input natively, OpenRouter will
parse the file and pass the parsed results to the requested model.
</Info>

Note that multiple PDFs can be sent in separate content array entries. The number of PDFs you can
send in a single request varies per provider and per model. Due to how the content is parsed, we
recommend sending the text prompt first, then the PDF. If the PDF must come first, we recommend
putting it in the system prompt.

### Processing PDFs

Here's how to send and process a PDF:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemma-3-27b-it',
ENGINE: PDFParserEngine.PDFText,
DEFAULT_PDF_ENGINE,
}}
>
<CodeGroup>
```python
import requests
import json
import base64
from pathlib import Path

def encode_pdf_to_base64(pdf_path):
with open(pdf_path, "rb") as pdf_file:
return base64.b64encode(pdf_file.read()).decode('utf-8')

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {API_KEY_REF}",
"Content-Type": "application/json"
}

# Read and encode the PDF


pdf_path = "path/to/your/document.pdf"
base64_pdf = encode_pdf_to_base64(pdf_path)
data_url = f"data:application/pdf;base64,{base64_pdf}"

messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What are the main points in this document?"
},
{
"type": "file",
"file": {
"filename": "document.pdf",
"file_data": data_url
}
},
]
}
]

# Optional: Configure PDF processing engine


# PDF parsing will still work even if the plugin is not explicitly set
plugins = [
{
"id": "file-parser",
"pdf": {
"engine": "{{ENGINE}}" # defaults to "{{DEFAULT_PDF_ENGINE}}". See Pricing below
}
}
]

payload = {
"model": "{{MODEL}}",
"messages": messages,
"plugins": plugins
}

response = requests.post(url, headers=headers, json=payload)


print(response.json())
```

```typescript
import fs from 'fs';

async function encodePDFToBase64(pdfPath: string): Promise<string> {
const pdfBuffer = await fs.promises.readFile(pdfPath);
const base64PDF = pdfBuffer.toString('base64');
return `data:application/pdf;base64,${base64PDF}`;
}

// Read and encode the PDF


const pdfPath = 'path/to/your/document.pdf';
const base64PDF = await encodePDFToBase64(pdfPath);

const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'What are the main points in this document?',
},
{
type: 'file',
file: {
filename: 'document.pdf',
file_data: base64PDF,
},
},
],
},
],
// Optional: Configure PDF processing engine
// PDF parsing will still work even if the plugin is not explicitly set
plugins: [
{
id: 'file-parser',
pdf: {
engine: '{{ENGINE}}', // defaults to "{{DEFAULT_PDF_ENGINE}}". See Pricing below
},
},
],
}),
});

const data = await response.json();


console.log(data);
```
</CodeGroup>
</Template>


### Pricing

OpenRouter provides several PDF processing engines:

1. <code>"{PDFParserEngine.MistralOCR}"</code>: Best for scanned documents or PDFs with images (\${MISTRAL_OCR_COST.toString()} per 1,000 pages).
2. <code>"{PDFParserEngine.PDFText}"</code>: Best for well-structured PDFs with clear text content (Free).
3. <code>"{PDFParserEngine.Native}"</code>: Only available for models that support file input natively (charged as input tokens).

If you don't explicitly specify an engine, OpenRouter will default first to the model's native file processing capabilities, and if that's not available, we will use the <code>"{DEFAULT_PDF_ENGINE}"</code> engine.

To select an engine, use the plugin configuration:

<Template
data={{
API_KEY_REF,
ENGINE: PDFParserEngine.MistralOCR,
}}
>
<CodeGroup>
```python
plugins = [
{
"id": "file-parser",
"pdf": {
"engine": "{{ENGINE}}"
}
}
]
```

```typescript
{
plugins: [
{
id: 'file-parser',
pdf: {
engine: '{{ENGINE}}',
},
},
],
}
```
</CodeGroup>
</Template>

### Skip Parsing Costs

When you send a PDF to the API, the response may include file annotations in the assistant's
message. These annotations contain structured information about the PDF document that was parsed.
By sending these annotations back in subsequent requests, you can avoid re-parsing the same PDF
document multiple times, which saves both processing time and costs.

Here's how to reuse file annotations:

<Template
data={{
API_KEY_REF,
MODEL: 'google/gemma-3-27b-it'
}}
>
<CodeGroup>
```python
import requests
import json
import base64
from pathlib import Path

# First, encode and send the PDF


def encode_pdf_to_base64(pdf_path):
with open(pdf_path, "rb") as pdf_file:
return base64.b64encode(pdf_file.read()).decode('utf-8')

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {API_KEY_REF}",
"Content-Type": "application/json"
}

# Read and encode the PDF


pdf_path = "path/to/your/document.pdf"
base64_pdf = encode_pdf_to_base64(pdf_path)
data_url = f"data:application/pdf;base64,{base64_pdf}"

# Initial request with the PDF


messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What are the main points in this document?"
},
{
"type": "file",
"file": {
"filename": "document.pdf",
"file_data": data_url
}
},
]
}
]

payload = {
"model": "{{MODEL}}",
"messages": messages
}

response = requests.post(url, headers=headers, json=payload)


response_data = response.json()

# Store the annotations from the response


file_annotations = None
if response_data.get("choices") and len(response_data["choices"]) > 0:
if "annotations" in response_data["choices"][0]["message"]:
file_annotations = response_data["choices"][0]["message"]["annotations"]

# Follow-up request using the annotations (without sending the PDF again)
if file_annotations:
follow_up_messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What are the main points in this document?"
},
{
"type": "file",
"file": {
"filename": "document.pdf",
"file_data": data_url
}
}
]
},
{
"role": "assistant",
"content": "The document contains information about...",
"annotations": file_annotations
},
{
"role": "user",
"content": "Can you elaborate on the second point?"
}
]

follow_up_payload = {
"model": "{{MODEL}}",
"messages": follow_up_messages
}

follow_up_response = requests.post(url, headers=headers, json=follow_up_payload)


print(follow_up_response.json())
```

```typescript
import fs from 'fs/promises';
import fetch from 'node-fetch';

async function encodePDFToBase64(pdfPath: string): Promise<string> {


const pdfBuffer = await fs.readFile(pdfPath);
const base64PDF = pdfBuffer.toString('base64');
return `data:application/pdf;base64,${base64PDF}`;
}

// Initial request with the PDF


async function processDocument() {
// Read and encode the PDF
const pdfPath = 'path/to/your/document.pdf';
const base64PDF = await encodePDFToBase64(pdfPath);

  const initialResponse = await fetch(
    'https://openrouter.ai/api/v1/chat/completions',
{
method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'What are the main points in this document?',
},
{
type: 'file',
file: {
filename: 'document.pdf',
file_data: base64PDF,
},
},
],
},
],
}),
},
);

const initialData = await initialResponse.json();

// Store the annotations from the response


let fileAnnotations = null;
if (initialData.choices && initialData.choices.length > 0) {
if (initialData.choices[0].message.annotations) {
fileAnnotations = initialData.choices[0].message.annotations;
}
}

// Follow-up request using the annotations (without sending the PDF again)
if (fileAnnotations) {
const followUpResponse = await fetch(
      'https://openrouter.ai/api/v1/chat/completions',
{
method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'What are the main points in this document?',
},
{
type: 'file',
file: {
filename: 'document.pdf',
file_data: base64PDF,
},
},
],
},
{
role: 'assistant',
content: 'The document contains information about...',
annotations: fileAnnotations,
},
{
role: 'user',
content: 'Can you elaborate on the second point?',
},
],
}),
},
);

const followUpData = await followUpResponse.json();


console.log(followUpData);
}
}

processDocument();
```
</CodeGroup>
</Template>

<Info>
When you include the file annotations from a previous response in your
subsequent requests, OpenRouter will use this pre-parsed information instead
of re-parsing the PDF, which saves processing time and costs. This is
especially beneficial for large documents or when using the `mistral-ocr`
engine which incurs additional costs.
</Info>

### Response Format

The API will return a response in the following format:

```json
{
"id": "gen-1234567890",
"provider": "DeepInfra",
"model": "google/gemma-3-27b-it",
"object": "chat.completion",
"created": 1234567890,
"choices": [
{
"message": {
"role": "assistant",
"content": "The document discusses..."
}
}
],
"usage": {
"prompt_tokens": 1000,
"completion_tokens": 100,
"total_tokens": 1100
}
}
```

# Message Transforms

> Transform and optimize messages before sending them to AI models. Learn about middle-out
compression and context window optimization with OpenRouter.

To help with prompts that exceed the maximum context size of a model, OpenRouter supports a custom
parameter called `transforms`:

```typescript
{
transforms: ["middle-out"], // Compress prompts that are > context size.
messages: [...],
model // Works with any model
}
```

This can be useful for situations where perfect recall is not required. The transform works by
removing or truncating messages from the middle of the prompt, until the prompt fits within the
model's context window.

In some cases, the issue is not the token context length, but the actual number of messages. The
transform addresses this as well: For instance, Anthropic's Claude models enforce a maximum of
{anthropicMaxMessagesCount} messages. When this limit is exceeded with middle-out enabled, the
transform will keep half of the messages from the start and half from the end of the conversation.

When middle-out compression is enabled, OpenRouter will first try to find models whose context
length is at least half of your total required tokens (input + completion). For example, if your
prompt requires 10,000 tokens total, models with at least 5,000 context length will be considered.
If no model meets this criterion, OpenRouter will fall back to using the model with the highest
available context length.

The compression will then attempt to fit your content within the chosen model's context window by
removing or truncating content from the middle of the prompt. If middle-out compression is
disabled and your total tokens exceed the model's context length, the request will fail with an
error message suggesting you either reduce the length or enable middle-out compression.

<Note>
[All OpenRouter endpoints](/models) with 8k (8,192 tokens) or less context
length will default to using `middle-out`. To disable this, set `transforms: []` in the
request body.
</Note>

The middle of the prompt is compressed because [LLMs pay less attention](https://arxiv.org/abs/2307.03172) to the middle of sequences.
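
Here is a minimal sketch of a request with the transform enabled, and one with it explicitly disabled (the conversation content is a placeholder):

```python
import requests

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {"Authorization": "Bearer <OPENROUTER_API_KEY>"}

# Enable middle-out compression for an over-long conversation
payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "transforms": ["middle-out"],
    "messages": [{"role": "user", "content": "..."}],  # your long conversation here
}
print(requests.post(url, headers=headers, json=payload).json())

# Explicitly disable it, e.g. on small-context endpoints that default to middle-out
payload["transforms"] = []
print(requests.post(url, headers=headers, json=payload).json())
```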

# Uptime Optimization

> Learn how OpenRouter maximizes AI model uptime through real-time monitoring, intelligent
routing, and automatic fallbacks across multiple providers.

OpenRouter continuously monitors the health and availability of AI providers to ensure maximum
uptime for your applications. We track response times, error rates, and availability across all
providers in real-time, and route based on this feedback.

## How It Works

OpenRouter tracks response times, error rates, and availability across all providers in real-time.
This data helps us make intelligent routing decisions and provides transparency about service
reliability.

## Uptime Example: Claude 3.5 Sonnet

<UptimeChart permaslug="anthropic/claude-3.5-sonnet" />

## Uptime Example: Llama 3.3 70B Instruct

<UptimeChart permaslug="meta-llama/llama-3.3-70b-instruct" />

## Customizing Provider Selection

While our smart routing helps maintain high availability, you can also customize provider
selection using request parameters. This gives you control over which providers handle your
requests while still benefiting from automatic fallback when needed.

Learn more about customizing provider selection in our [Provider Routing documentation]
(/docs/features/provider-routing).

# Web Search

> Enable real-time web search capabilities in your AI model responses. Add factual, up-to-date
information to any model's output with OpenRouter's web search feature.

You can incorporate relevant web search results for *any* model on OpenRouter by activating and
customizing the `web` plugin, or by appending `:online` to the model slug:

```json
{
"model": "openai/gpt-4o:online"
}
```

This is a shortcut for using the `web` plugin, and is exactly equivalent to:

```json
{
  "model": "openai/gpt-4o",
"plugins": [{ "id": "web" }]
}
```

The web search plugin is powered by [Exa](https://exa.ai) and uses their ["auto"](https://docs.exa.ai/reference/how-exa-search-works#combining-neural-and-keyword-the-best-of-both-worlds-through-exa-auto-search) method (a combination of keyword search and embeddings-based web search) to find the most relevant results and augment/ground your prompt.

## Parsing web search results

Web search results for all models (including native-only models like Perplexity and OpenAI Online) are available in the API and standardized by OpenRouter to follow the same annotation schema in the [OpenAI Chat Completion Message type](https://platform.openai.com/docs/api-reference/chat/object):

```json
{
"message": {
"role": "assistant",
"content": "Here's the latest news I found: ...",
"annotations": [
      {
        "type": "url_citation",
        "url_citation": {
          "url": "https://www.example.com/web-search-result",
          "title": "Title of the web search result",
          "content": "Content of the web search result", // Added by OpenRouter if available
          "start_index": 100, // The index of the first character of the URL citation in the message.
          "end_index": 200 // The index of the last character of the URL citation in the message.
}
}
]
}
}
```
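
For example, extracting the citations from a response (a sketch using the schema above):

```python
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "openai/gpt-4o:online",
        "messages": [{"role": "user", "content": "What's new in AI this week?"}],
    },
).json()

message = resp["choices"][0]["message"]
print(message["content"])
for annotation in message.get("annotations", []):
    if annotation["type"] == "url_citation":
        citation = annotation["url_citation"]
        print(f'- {citation["title"]}: {citation["url"]}')
```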

## Customizing the Web Plugin

The maximum number of results returned by the web plugin, and the prompt used to attach them to your message stream, can be customized:

```json
{
"model": "openai/gpt-4o:online",
"plugins": [
{
"id": "web",
"max_results": 1, // Defaults to 5
"search_prompt": "Some relevant web results:" // See default below
}
]
}
```

By default, the web plugin uses the following search prompt, using the current date:

```
A web search was conducted on `date`. Incorporate the following web search results into your
response.

IMPORTANT: Cite them using markdown links named using the domain of the source.
Example: [nytimes.com](https://nytimes.com/some-page).
```

## Pricing

The web plugin uses your OpenRouter credits and charges *\$4 per 1000 results*. With the default `max_results` of 5, this comes out to a maximum of \$0.02 per request, in addition to the LLM usage for the search result prompt tokens.

## Non-plugin Web Search

Some models have built-in web search. These models charge a fee based on the search context size,
which determines how much search data is retrieved and processed for a query.

### Search Context Size Thresholds

Search context can be 'low', 'medium', or 'high' and determines how much search context is
retrieved for a query:

* **Low**: Minimal search context, suitable for basic queries
* **Medium**: Moderate search context, good for general queries
* **High**: Extensive search context, ideal for detailed research

### Specifying Search Context Size

You can specify the search context size in your API request using the `web_search_options`
parameter:

```json
{
"model": "openai/gpt-4.1",
"messages": [
{
"role": "user",
"content": "What are the latest developments in quantum computing?"
}
],
"web_search_options": {
"search_context_size": "high"
}
}
```

### OpenAI Model Pricing

For GPT-4, GPT-4o, and GPT-4 Omni Models:

| Search Context Size | Price per 1000 Requests |
| ------------------- | ----------------------- |
| Low                 | \$30.00                 |
| Medium              | \$35.00                 |
| High                | \$50.00                 |

For GPT-4 Mini, GPT-4o Mini, and GPT-4 Omni Mini Models:

| Search Context Size | Price per 1000 Requests |
| ------------------- | ----------------------- |
| Low                 | \$25.00                 |
| Medium              | \$27.50                 |
| High                | \$30.00                 |

### Perplexity Model Pricing

For Sonar and SonarReasoning:

| Search Context Size | Price per 1000 Requests |
| ------------------- | ----------------------- |
| Low                 | \$5.00                  |
| Medium              | \$8.00                  |
| High                | \$12.00                 |

For SonarPro and SonarReasoningPro:

| Search Context Size | Price per 1000 Requests |
| ------------------- | ----------------------- |
| Low                 | \$6.00                  |
| Medium              | \$10.00                 |
| High                | \$14.00                 |

<Note title="Pricing Documentation">
  For more detailed information about pricing models, refer to the official documentation:

  * [OpenAI Pricing](https://platform.openai.com/docs/pricing#web-search)
  * [Perplexity Pricing](https://docs.perplexity.ai/guides/pricing)
</Note>

# Zero Completion Insurance

> Learn how OpenRouter protects users from being charged for failed or empty AI responses with
zero completion insurance.

OpenRouter provides zero completion insurance to protect users from being charged for failed or
empty responses. When a response contains no output tokens and either has a blank finish reason or
an error, you will not be charged for the request, even if the underlying provider charges for
prompt processing.

<Note>
Zero completion insurance is automatically enabled for all accounts and requires no
configuration.
</Note>

## How It Works

Zero completion insurance automatically applies to all requests across all models and providers.
When a response meets either of these conditions, no credits will be deducted from your account:

* The response has zero completion tokens AND a blank/null finish reason
* The response has an error finish reason

## Viewing Protected Requests

On your activity page, requests that were protected by zero completion insurance will show zero
credits deducted. This applies even in cases where OpenRouter may have been charged by the
provider for prompt processing.

# Provisioning API Keys

> Manage OpenRouter API keys programmatically through dedicated management endpoints. Create,
read, update, and delete API keys for automated key distribution and control.

OpenRouter provides endpoints to programmatically manage your API keys, enabling key creation and
management for applications that need to distribute or rotate keys automatically.

## Creating a Provisioning API Key

To use the key management API, you first need to create a Provisioning API key:

1. Go to the [Provisioning API Keys page](https://openrouter.ai/settings/provisioning-keys)
2. Click "Create New Key"
3. Complete the key creation process

Provisioning keys cannot be used to make API calls to OpenRouter's completion endpoints - they are
exclusively for key management operations.

## Use Cases

Common scenarios for programmatic key management include:

* **SaaS Applications**: Automatically create unique API keys for each customer instance
* **Key Rotation**: Regularly rotate API keys for security compliance
* **Usage Monitoring**: Track key usage and automatically disable keys that exceed limits

## Example Usage

All key management endpoints are under `/api/v1/keys` and require a Provisioning API key in the
Authorization header.

<CodeGroup>
```python title="Python"
import requests

PROVISIONING_API_KEY = "your-provisioning-key"
BASE_URL = "https://openrouter.ai/api/v1/keys"

# List the most recent 100 API keys


response = requests.get(
BASE_URL,
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
}
)

# You can paginate using the offset parameter


response = requests.get(
f"{BASE_URL}?offset=100",
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
}
)

# Create a new API key


response = requests.post(
f"{BASE_URL}/",
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
},
json={
"name": "Customer Instance Key",
"label": "customer-123",
"limit": 1000 # Optional credit limit
}
)

# Get a specific key


key_hash = "<YOUR_KEY_HASH>"
response = requests.get(
f"{BASE_URL}/{key_hash}",
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
}
)

# Update a key
response = requests.patch(
f"{BASE_URL}/{key_hash}",
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
},
json={
"name": "Updated Key Name",
"disabled": True # Disable the key
}
)

# Delete a key
response = requests.delete(
f"{BASE_URL}/{key_hash}",
headers={
"Authorization": f"Bearer {PROVISIONING_API_KEY}",
"Content-Type": "application/json"
}
)
```

```typescript title="TypeScript"
const PROVISIONING_API_KEY = 'your-provisioning-key';
const BASE_URL = 'https://openrouter.ai/api/v1/keys';

// List the most recent 100 API keys


const listKeys = await fetch(BASE_URL, {
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
});

// You can paginate using the `offset` query parameter


const listKeysPage2 = await fetch(`${BASE_URL}?offset=100`, {
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
});

// Create a new API key


const createKey = await fetch(`${BASE_URL}`, {
method: 'POST',
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: 'Customer Instance Key',
label: 'customer-123',
limit: 1000, // Optional credit limit
}),
});

// Get a specific key


const keyHash = '<YOUR_KEY_HASH>';
const getKey = await fetch(`${BASE_URL}/${keyHash}`, {
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
});

// Update a key
const updateKey = await fetch(`${BASE_URL}/${keyHash}`, {
method: 'PATCH',
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: 'Updated Key Name',
disabled: true, // Disable the key
}),
});

// Delete a key
const deleteKey = await fetch(`${BASE_URL}/${keyHash}`, {
method: 'DELETE',
headers: {
Authorization: `Bearer ${PROVISIONING_API_KEY}`,
'Content-Type': 'application/json',
},
});
```
</CodeGroup>

## Response Format

API responses return JSON objects containing key information:

```json
{
"data": [
{
"created_at": "2025-02-19T20:52:27.363244+00:00",
"updated_at": "2025-02-19T21:24:11.708154+00:00",
"hash": "<YOUR_KEY_HASH>",
"label": "sk-or-v1-customkey",
"name": "Customer Key",
"disabled": false,
"limit": 10,
"usage": 0
}
]
}
```

When creating a new key, the response will include the key string itself.

# API Reference

> Comprehensive guide to OpenRouter's API. Learn about request/response schemas, authentication,
parameters, and integration with multiple AI model providers.

OpenRouter's request and response schemas are very similar to the OpenAI Chat API, with a few
small differences. At a high level, **OpenRouter normalizes the schema across models and
providers** so you only need to learn one.

## Requests

### Completions Request Format

Here is the request schema as a TypeScript type. This will be the body of your `POST` request to
the `/api/v1/chat/completions` endpoint (see the [quick start](/docs/quick-start) above for an
example).

For a complete list of parameters, see the [Parameters](/docs/api-reference/parameters) page.

<CodeGroup>
```typescript title="Request Schema"
// Definitions of subtypes are below
type Request = {
// Either "messages" or "prompt" is required
messages?: Message[];
prompt?: string;

// If "model" is unspecified, uses the user's default


model?: string; // See "Supported Models" section

// Allows to force the model to produce specific output format.


// See models page and note on this docs page for which models support it.
response_format?: { type: 'json_object' };

stop?: string | string[];


stream?: boolean; // Enable streaming

// See LLM Parameters (openrouter.ai/docs/api-reference/parameters)


max_tokens?: number; // Range: [1, context_length)
temperature?: number; // Range: [0, 2]

// Tool calling
// Will be passed down as-is for providers implementing OpenAI's interface.
// For providers with custom interfaces, we transform and map the properties.
  // Otherwise, we transform the tools into a YAML template. The model responds with an assistant message.
// See models supporting tool calling: openrouter.ai/models?supported_parameters=tools

tools?: Tool[];
tool_choice?: ToolChoice;

// Advanced optional parameters


seed?: number; // Integer only
top_p?: number; // Range: (0, 1]
top_k?: number; // Range: [1, Infinity) Not available for OpenAI models
frequency_penalty?: number; // Range: [-2, 2]
presence_penalty?: number; // Range: [-2, 2]
repetition_penalty?: number; // Range: (0, 2]
logit_bias?: { [key: number]: number };
  top_logprobs?: number; // Integer only
min_p?: number; // Range: [0, 1]
top_a?: number; // Range: [0, 1]

// Reduce latency by providing the model with a predicted output


  // https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs
prediction?: { type: 'content'; content: string };

// OpenRouter-only parameters
// See "Prompt Transforms" section: openrouter.ai/docs/transforms
transforms?: string[];
// See "Model Routing" section: openrouter.ai/docs/model-routing
models?: string[];
route?: 'fallback';
// See "Provider Routing" section: openrouter.ai/docs/provider-routing
provider?: ProviderPreferences;
};

// Subtypes:

type TextContent = {
type: 'text';
text: string;
};

type ImageContentPart = {
type: 'image_url';
image_url: {
url: string; // URL or base64 encoded image data
detail?: string; // Optional, defaults to "auto"
};
};

type ContentPart = TextContent | ImageContentPart;

type Message =
| {
role: 'user' | 'assistant' | 'system';
// ContentParts are only for the "user" role:
content: string | ContentPart[];
// If "name" is included, it will be prepended like this
// for non-OpenAI models: `{name}: {content}`
name?: string;
}
| {
role: 'tool';
content: string;
tool_call_id: string;
name?: string;
};

type FunctionDescription = {
description?: string;
name: string;
parameters: object; // JSON Schema object
};

type Tool = {

https://openrouter.ai/docs/llms-full.txt 49/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
type: 'function';
function: FunctionDescription;
};

type ToolChoice =
| 'none'
| 'auto'
| {
type: 'function';
function: {
name: string;
};
};
```
</CodeGroup>

The `response_format` parameter ensures you receive a structured response from the LLM. The
parameter is only supported by OpenAI models, Nitro models, and some others - check the providers
on the model page on openrouter.ai/models to see if it's supported, and set `require_parameters`
to true in your Provider Preferences. See [Provider Routing](/docs/features/provider-routing).
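
For example, a request body combining `response_format` with that provider preference might look like this (a sketch; see Provider Routing for the full set of options):

```python
payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "List three colors as JSON."}],
    "response_format": {"type": "json_object"},
    # Only route to providers that support every parameter in this request
    "provider": {"require_parameters": True},
}
```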

### Headers

OpenRouter allows you to specify some optional headers to identify your app and make it
discoverable to users on our site.

* `HTTP-Referer`: Identifies your app on openrouter.ai
* `X-Title`: Sets/modifies your app's title

<CodeGroup>
```typescript title="TypeScript"
fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: 'Bearer <OPENROUTER_API_KEY>',
'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'openai/gpt-4o',
messages: [
{
role: 'user',
content: 'What is the meaning of life?',
},
],
}),
});
```
</CodeGroup>

<Info title="Model routing">
  If the `model` parameter is omitted, the user or payer's default is used.
Otherwise, remember to select a value for `model` from the [supported
models](/models) or [API](/api/v1/models), and include the organization
prefix. OpenRouter will select the least expensive and best GPUs available to
serve the request, and fall back to other providers or GPUs if it receives a
5xx response code or if you are rate-limited.
</Info>

<Info title="Streaming">
  [Server-Sent Events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format)
are supported as well, to enable streaming *for all models*. Simply send
`stream: true` in your request body. The SSE stream will occasionally contain
a "comment" payload, which you should ignore (noted below).
</Info>

https://openrouter.ai/docs/llms-full.txt 50/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt

<Info title="Non-standard parameters">
  If the chosen model doesn't support a request parameter (such as `logit_bias`
in non-OpenAI models, or `top_k` for OpenAI), then the parameter is ignored.
The rest are forwarded to the underlying model API.
</Info>

### Assistant Prefill

OpenRouter supports asking models to complete a partial response. This can be useful for guiding
models to respond in a certain way.

To use this feature, simply include a message with `role: "assistant"` at the end of your `messages` array.

<CodeGroup>
```typescript title="TypeScript"
fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: 'Bearer <OPENROUTER_API_KEY>',
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'openai/gpt-4o',
messages: [
{ role: 'user', content: 'What is the meaning of life?' },
{ role: 'assistant', content: "I'm not sure, but my best guess is" },
],
}),
});
```
</CodeGroup>

## Responses

### Completions Response Format

OpenRouter normalizes the schema across models and providers to comply with the [OpenAI Chat API]
(https://platform.openai.com/docs/api-reference/chat).

This means that `choices` is always an array, even if the model only returns one completion. Each
choice will contain a `delta` property if a stream was requested and a `message` property
otherwise. This makes it easier to use the same code for all models.

Here's the response schema as a TypeScript type:

```typescript TypeScript
// Definitions of subtypes are below
type Response = {
id: string;
// Depending on whether you set "stream" to "true" and
// whether you passed in "messages" or a "prompt", you
// will get a different output shape
choices: (NonStreamingChoice | StreamingChoice | NonChatChoice)[];
created: number; // Unix timestamp
model: string;
object: 'chat.completion' | 'chat.completion.chunk';

system_fingerprint?: string; // Only present if the provider supports it

  // Usage data is always returned for non-streaming.
  // When streaming, you will get one usage object at
  // the end accompanied by an empty choices array.
  usage?: ResponseUsage;
};
```

```typescript
// If the provider returns usage, we pass it down
// as-is. Otherwise, we count using the GPT-4 tokenizer.

type ResponseUsage = {
/** Including images and tools if any */
prompt_tokens: number;
/** The tokens generated */
completion_tokens: number;
/** Sum of the above two fields */
total_tokens: number;
};
```

```typescript
// Subtypes:
type NonChatChoice = {
finish_reason: string | null;
text: string;
error?: ErrorResponse;
};

type NonStreamingChoice = {
finish_reason: string | null;
native_finish_reason: string | null;
message: {
content: string | null;
role: string;
tool_calls?: ToolCall[];
};
error?: ErrorResponse;
};

type StreamingChoice = {
finish_reason: string | null;
native_finish_reason: string | null;
delta: {
content: string | null;
role?: string;
tool_calls?: ToolCall[];
};
error?: ErrorResponse;
};

type ErrorResponse = {
code: number; // See "Error Handling" section
message: string;
  // Contains additional error information such as
  // provider details, the raw error message, etc.
  metadata?: Record<string, unknown>;
};

type ToolCall = {
id: string;
type: 'function';
function: FunctionCall;
};
```

Here's an example:

```json
{
"id": "gen-xxxxxxxxxxxxxx",
"choices": [
{
"finish_reason": "stop", // Normalized finish_reason
"native_finish_reason": "stop", // The raw finish_reason from the provider
"message": {
// will be "delta" if streaming
"role": "assistant",

https://openrouter.ai/docs/llms-full.txt 52/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
"content": "Hello there!"
}
}
],
"usage": {
"prompt_tokens": 0,
"completion_tokens": 4,
"total_tokens": 4
},
"model": "openai/gpt-3.5-turbo" // Could also be "anthropic/claude-2.1", etc, depending on the
"model" that ends up being used
}
```

### Finish Reason

OpenRouter normalizes each model's `finish_reason` to one of the following values: `tool_calls`,
`stop`, `length`, `content_filter`, `error`.

Some models and providers may have additional finish reasons. The raw finish\_reason string
returned by the model is available via the `native_finish_reason` property.
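
In practice, you can branch on the normalized value and keep the raw value for logging only; a minimal sketch:

```typescript
// Sketch: handle the normalized finish_reason, using the raw
// native_finish_reason only for provider-specific diagnostics.
function describeFinish(choice: {
  finish_reason: string | null;
  native_finish_reason: string | null;
}): string {
  switch (choice.finish_reason) {
    case 'stop':
      return 'Model finished naturally.';
    case 'length':
      return 'Hit max_tokens; consider raising the limit.';
    case 'tool_calls':
      return 'Model is requesting one or more tool calls.';
    case 'content_filter':
      return 'Output was filtered; adjust the prompt.';
    case 'error':
      return `Generation failed (provider reported: ${choice.native_finish_reason}).`;
    default:
      return `Unrecognized finish reason: ${choice.native_finish_reason}`;
  }
}
```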

### Querying Cost and Stats

The token counts that are returned in the completions API response are **not** counted via the
model's native tokenizer. Instead, a normalized, model-agnostic count is used (accomplished via the
GPT-4o tokenizer), because some providers do not reliably return native token counts. This
behavior is becoming more rare, however, and we may add native token counts to the response object
in the future.

Credit usage and model pricing are based on the **native** token counts (not the 'normalized'
token counts returned in the API response).

For precise token accounting using the model's native tokenizer, you can retrieve the full
generation information via the `/api/v1/generation` endpoint.

You can use the returned `id` to query for the generation stats (including token counts and cost)
after the request is complete. This is how you can get the cost and tokens for *all models and
requests*, streaming and non-streaming.

<CodeGroup>
```typescript title="Query Generation Stats"
const generation = await fetch(
  `https://openrouter.ai/api/v1/generation?id=${generationId}`, // generationId from the completion response
  { headers },
);

const stats = await generation.json();
```
</CodeGroup>

Please see the [Generation](/docs/api-reference/get-a-generation) API reference for the full response shape.

Note that token counts are also available in the `usage` field of the response body for non-
streaming completions.

# Streaming

> Learn how to implement streaming responses with OpenRouter's API. Complete guide to Server-Sent
Events (SSE) and real-time model outputs.

The OpenRouter API allows streaming responses from *any model*. This is useful for building chat
interfaces or other applications where the UI should update as the model generates the response.

To enable streaming, you can set the `stream` parameter to `true` in your request. The model will
then stream the response to the client in chunks, rather than returning the entire response at
once.

Here is an example of how to stream a response, and process it:

<Template
data={{
API_KEY_REF,
MODEL: Model.GPT_4_Omni
}}
>
<CodeGroup>
```python Python
import requests
import json

question = "How would you build the tallest building ever?"

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}

payload = {
"model": "{{MODEL}}",
"messages": [{"role": "user", "content": question}],
"stream": True
}

buffer = ""
with requests.post(url, headers=headers, json=payload, stream=True) as r:
for chunk in r.iter_content(chunk_size=1024, decode_unicode=True):
buffer += chunk
while True:
try:
# Find the next complete SSE line
line_end = buffer.find('\n')
if line_end == -1:
break

line = buffer[:line_end].strip()
buffer = buffer[line_end + 1:]

if line.startswith('data: '):
data = line[6:]
if data == '[DONE]':
break

try:
data_obj = json.loads(data)
content = data_obj["choices"][0]["delta"].get("content")
if content:
print(content, end="", flush=True)
except json.JSONDecodeError:
pass
except Exception:
break
```

```typescript TypeScript
const question = 'How would you build the tallest building ever?';
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: `Bearer ${API_KEY_REF}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
    messages: [{ role: 'user', content: question }],
    stream: true,
}),
});

const reader = response.body?.getReader();
if (!reader) {
  throw new Error('Response body is not readable');
}

const decoder = new TextDecoder();
let buffer = '';

try {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Append new chunk to buffer
    buffer += decoder.decode(value, { stream: true });

    // Process complete lines from buffer
    while (true) {
      const lineEnd = buffer.indexOf('\n');
      if (lineEnd === -1) break;

      const line = buffer.slice(0, lineEnd).trim();
      buffer = buffer.slice(lineEnd + 1);

if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;

try {
const parsed = JSON.parse(data);
const content = parsed.choices[0].delta.content;
if (content) {
console.log(content);
}
} catch (e) {
// Ignore invalid JSON
}
}
}
}
} finally {
reader.cancel();
}
```
</CodeGroup>
</Template>

### Additional Information

For SSE (Server-Sent Events) streams, OpenRouter occasionally sends comments to prevent connection
timeouts. These comments look like:

```text
: OPENROUTER PROCESSING
```

Comment payload can be safely ignored per the [SSE specs](https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation).
However, you can leverage it to improve UX as needed, e.g. by showing a dynamic loading indicator.

Some SSE client implementations might not parse the payload according to spec, which leads to an
uncaught error when you `JSON.parse` the non-JSON payloads. We recommend the following clients,
or skip comment lines yourself as sketched after this list:

* [eventsource-parser](https://github.com/rexxars/eventsource-parser)
* [OpenAI SDK](https://www.npmjs.com/package/openai)
* [Vercel AI SDK](https://www.npmjs.com/package/ai)
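
If you parse the stream by hand instead, the safe approach is to skip any line that begins with a colon before attempting to parse JSON. A minimal sketch, reusing the line-splitting loop from the streaming example above:

```typescript
// Sketch: treat lines starting with ':' as SSE comments
// (e.g. ": OPENROUTER PROCESSING") and never JSON-parse them.
function handleSSELine(line: string): void {
  if (line.length === 0) return; // Event separator
  if (line.startsWith(':')) {
    // Comment / keep-alive; optionally surface a loading indicator here.
    return;
  }
  if (line.startsWith('data: ')) {
    const data = line.slice(6);
    if (data === '[DONE]') return;
    const content = JSON.parse(data).choices?.[0]?.delta?.content;
    if (content) console.log(content);
  }
}
```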

### Stream Cancellation

Streaming requests can be cancelled by aborting the connection. For supported providers, this
immediately stops model processing and billing.

<Accordion title="Provider Support">


**Supported**

* OpenAI, Azure, Anthropic


* Fireworks, Mancer, Recursal
* AnyScale, Lepton, OctoAI
* Novita, DeepInfra, Together
* Cohere, Hyperbolic, Infermatic
* Avian, XAI, Cloudflare
* SFCompute, Nineteen, Liquid
* Friendli, Chutes, DeepSeek

**Not Currently Supported**

* AWS Bedrock, Groq, Modal
* Google, Google AI Studio, Minimax
* HuggingFace, Replicate, Perplexity
* Mistral, AI21, Featherless
* Lynn, Lambda, Reflection
* SambaNova, Inflection, ZeroOneAI
* AionLabs, Alibaba, Nebius
* Kluster, Targon, InferenceNet
</Accordion>

To implement stream cancellation:

<Template
data={{
API_KEY_REF,
MODEL: Model.GPT_4_Omni
}}
>
<CodeGroup>
```python Python
import requests
from threading import Event, Thread

def stream_with_cancellation(prompt: str, cancel_event: Event):
    with requests.Session() as session:
        response = session.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {{API_KEY_REF}}"},
            json={
                "model": "{{MODEL}}",
                "messages": [{"role": "user", "content": prompt}],
                "stream": True,
            },
            stream=True,
        )

try:
for line in response.iter_lines():
if cancel_event.is_set():
response.close()
return
if line:
print(line.decode(), end="", flush=True)
finally:
response.close()

# Example usage:
cancel_event = Event()
stream_thread = Thread(target=lambda: stream_with_cancellation("Write a story", cancel_event))
stream_thread.start()

# To cancel the stream:
cancel_event.set()
```

```typescript TypeScript
const controller = new AbortController();

try {
const response = await fetch(
'https://openrouter.ai/api/v1/chat/completions',
{
method: 'POST',
headers: {
Authorization: `Bearer ${{{API_KEY_REF}}}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: '{{MODEL}}',
messages: [{ role: 'user', content: 'Write a story' }],
stream: true,
}),
signal: controller.signal,
},
);

  // Process the stream...
} catch (error) {
if (error.name === 'AbortError') {
console.log('Stream cancelled');
} else {
throw error;
}
}

// To cancel the stream:
controller.abort();
```
</CodeGroup>
</Template>

<Warning>
Cancellation only works for streaming requests with supported providers. For
non-streaming requests or unsupported providers, the model will continue
processing and you will be billed for the complete response.
</Warning>

# Limits

> Learn about OpenRouter's API rate limits, credit-based quotas, and DDoS protection. Configure
and monitor your model usage limits effectively.

<Tip>
If you need a lot of inference, making additional accounts or API keys *makes
no difference*. We manage the rate limit globally. We do however have
different rate limits for different models, so you can share the load that way
if you do run into issues. If you start getting rate limited -- [tell
us](https://discord.gg/fVyRaUDgxW)! We are here to help. If you are able,
don't specify providers; that will let us load balance it better.
</Tip>

## Rate Limits and Credits Remaining

To check the rate limit or credits left on an API key, make a GET request to
`https://openrouter.ai/api/v1/auth/key`.

<Template data={{ API_KEY_REF }}>
<CodeGroup>

```typescript title="TypeScript"
const response = await fetch('https://openrouter.ai/api/v1/auth/key', {
method: 'GET',
headers: {
Authorization: 'Bearer {{API_KEY_REF}}',
},
});
```

```python title="Python"
import requests
import json

response = requests.get(
url="https://openrouter.ai/api/v1/auth/key",
headers={
"Authorization": f"Bearer {{API_KEY_REF}}"
}
)

print(json.dumps(response.json(), indent=2))
```
</CodeGroup>
</Template>

If you submit a valid API key, you should get a response of the form:

```typescript title="TypeScript"
type Key = {
data: {
label: string;
usage: number; // Number of credits used
limit: number | null; // Credit limit for the key, or null if unlimited
is_free_tier: boolean; // Whether the user has paid for credits before
rate_limit: {
requests: number; // Number of requests allowed...
interval: string; // in this interval, e.g. "10s"
};
};
};
```
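
For example, you could derive the credits remaining on a key (treating a `null` limit as unlimited) like this; a minimal sketch assuming the response shape above:

```typescript
// Sketch: compute remaining credits from the key metadata.
const res = await fetch('https://openrouter.ai/api/v1/auth/key', {
  headers: { Authorization: 'Bearer <OPENROUTER_API_KEY>' },
});
const { data } = await res.json();

const remaining =
  data.limit === null ? Infinity : Math.max(0, data.limit - data.usage);
console.log(
  `Used ${data.usage} credits, ${remaining} remaining; ` +
    `allowed ${data.rate_limit.requests} requests per ${data.rate_limit.interval}.`,
);
```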

There are a few rate limits that apply to certain types of requests, regardless of account status:

1. **Free usage limits**: If you're using a free model variant (with an ID ending in <code>{sep}{Variant.Free}</code>), you can make up to {FREE_MODEL_RATE_LIMIT_RPM} requests per minute. The following per-day limits apply:

   * If you have purchased less than {FREE_MODEL_CREDITS_THRESHOLD} credits, you're limited to {FREE_MODEL_NO_CREDITS_RPD} <code>{sep}{Variant.Free}</code> model requests per day.
   * If you purchase at least {FREE_MODEL_CREDITS_THRESHOLD} credits, your daily limit is increased to {FREE_MODEL_HAS_CREDITS_RPD} <code>{sep}{Variant.Free}</code> model requests per day.

2. **DDoS protection**: Cloudflare's DDoS protection will block requests that dramatically exceed
reasonable usage.

For all other requests, rate limits are a function of the number of credits remaining on the key
or account. Partial credits round up in your favor. For the credits available on your API key, you
can make **1 request per credit per second** up to the surge limit (typically 500 requests per
second, but you can go higher).

For example:

* 0.5 credits → 1 req/s (minimum)
* 5 credits → 5 req/s
* 10 credits → 10 req/s
* 500 credits → 500 req/s
* 1000 credits → Contact us if you see rate limiting from OpenRouter

If your account has a negative credit balance, you may see <code>
{HTTPStatus.S402_Payment_Required}</code> errors, including for free models. Adding credits to put
your balance above zero allows you to use those models again.

# Authentication

> Learn how to authenticate with OpenRouter using API keys and Bearer tokens. Complete guide to
secure authentication methods and best practices.

You can cover model costs with OpenRouter API keys.

Our API authenticates requests using Bearer tokens. This allows you to use `curl` or the [OpenAI
SDK](https://platform.openai.com/docs/frameworks) directly with OpenRouter.

<Warning>
API keys on OpenRouter are more powerful than keys used directly for model APIs.

They allow users to set credit limits for apps, and they can be used in [OAuth](/docs/use-cases/oauth-pkce) flows.
</Warning>

## Using an API key

To use an API key, [first create your key](https://openrouter.ai/keys). Give it a name and you can
optionally set a credit limit.

If you're calling the OpenRouter API directly, set the `Authorization` header to a Bearer token
with your API key.

If you're using the OpenAI TypeScript SDK, set the `baseURL` to `https://openrouter.ai/api/v1`
and the `apiKey` to your API key.

<CodeGroup>
```typescript title="TypeScript (Bearer Token)"
fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: 'Bearer <OPENROUTER_API_KEY>',
'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'openai/gpt-4o',
messages: [
{
role: 'user',
content: 'What is the meaning of life?',
},
],
}),
});
```

```typescript title="TypeScript (OpenAI SDK)"


import OpenAI from 'openai';

const openai = new OpenAI({


baseURL: 'https://openrouter.ai/api/v1',
apiKey: '<OPENROUTER_API_KEY>',
defaultHeaders: {
'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
},
});

async function main() {
  const completion = await openai.chat.completions.create({
model: 'openai/gpt-4o',
messages: [{ role: 'user', content: 'Say this is a test' }],
});

console.log(completion.choices[0].message);
}

main();
```

```python title="Python"
import openai

openai.api_base = "https://openrouter.ai/api/v1"
openai.api_key = "<OPENROUTER_API_KEY>"

response = openai.ChatCompletion.create(
model="openai/gpt-4o",
messages=[...],
headers={
"HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
"X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
},
)

reply = response.choices[0].message
```

```shell title="Shell"
curl https://openrouter.ai/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENROUTER_API_KEY" \
-d '{
"model": "openai/gpt-4o",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
}'
```
</CodeGroup>

To stream with Python, [see this example from OpenAI](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).

## If your key has been exposed

<Warning>
You must protect your API keys and never commit them to public repositories.
</Warning>

OpenRouter is a GitHub secret scanning partner, and has other methods to detect exposed keys. If
we determine that your key has been compromised, you will receive an email notification.

If you receive such a notification or suspect your key has been exposed, immediately visit [your
key settings page](https://openrouter.ai/settings/keys) to delete the compromised key and create a
new one.

Using environment variables and keeping keys out of your codebase is strongly recommended.

# Parameters

> Learn about all available parameters for OpenRouter API requests. Configure temperature, max
tokens, top_p, and other model-specific settings.

Sampling parameters shape the token generation process of the model. You may send any parameters
from the following list, as well as others, to OpenRouter.

OpenRouter will default to the values listed below if certain parameters are absent from your
request (for example, `temperature` to 1.0). We will also transmit some provider-specific
parameters, such as `safe_prompt` for Mistral or `raw_mode` for Hyperbolic directly to the
respective providers if specified.

Please refer to the model’s provider section to confirm which parameters are supported. For
detailed guidance on managing provider-specific parameters, [click here](/docs/features/provider-
routing#requiring-providers-to-support-all-parameters-beta).
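
As a concrete starting point, here is a request that sets several of the parameters described below explicitly (the values are arbitrary; a minimal sketch):

```typescript
// Sketch: sampling parameters ride alongside the usual fields.
// Parameters a provider doesn't support are simply ignored.
await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <OPENROUTER_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'Write a haiku about routers.' }],
    temperature: 0.7, // more focused than the 1.0 default
    top_p: 0.9,
    frequency_penalty: 0.5,
    max_tokens: 128,
    seed: 42, // best-effort determinism
  }),
});
```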

## Temperature

* Key: `temperature`

* Optional, **float**, 0.0 to 2.0

* Default: 1.0

* Explainer Video: [Watch](https://youtu.be/ezgqHnWvua8)

This setting influences the variety in the model's responses. Lower values lead to more
predictable and typical responses, while higher values encourage more diverse and less common
responses. At 0, the model always gives the same response for a given input.

## Top P

* Key: `top_p`

* Optional, **float**, 0.0 to 1.0

* Default: 1.0

* Explainer Video: [Watch](https://youtu.be/wQP-im_HInk)

This setting limits the model's choices to a percentage of likely tokens: only the top tokens
whose probabilities add up to P. A lower value makes the model's responses more predictable, while
the default setting allows for a full range of token choices. Think of it like a dynamic Top-K.

## Top K

* Key: `top_k`

* Optional, **integer**, 0 or above

* Default: 0

* Explainer Video: [Watch](https://youtu.be/EbZv6-N8Xlk)

This limits the model's choice of tokens at each step, making it choose from a smaller set. A
value of 1 means the model will always pick the most likely next token, leading to predictable
results. By default this setting is disabled, making the model consider all choices.

## Frequency Penalty

* Key: `frequency_penalty`

* Optional, **float**, -2.0 to 2.0

* Default: 0.0

* Explainer Video: [Watch](https://youtu.be/p4gl6fqI0_w)

This setting controls the repetition of tokens based on how often they appear in the input. It
makes tokens that appear more frequently in the input less likely to be used again, proportional
to how often they occur. Token penalty scales with the number of occurrences. Negative values will
encourage token reuse.

## Presence Penalty

* Key: `presence_penalty`

* Optional, **float**, -2.0 to 2.0

* Default: 0.0

* Explainer Video: [Watch](https://youtu.be/MwHG5HL-P74)

Adjusts how often the model repeats specific tokens already used in the input. Higher values make
such repetition less likely, while negative values do the opposite. Token penalty does not scale
with the number of occurrences. Negative values will encourage token reuse.

## Repetition Penalty

* Key: `repetition_penalty`

* Optional, **float**, 0.0 to 2.0

* Default: 1.0

* Explainer Video: [Watch](https://youtu.be/LHjGAnLm3DM)

Helps to reduce the repetition of tokens from the input. A higher value makes the model less
likely to repeat tokens, but too high a value can make the output less coherent (often with run-on
sentences that lack small words). Token penalty scales based on original token's probability.

## Min P

* Key: `min_p`

* Optional, **float**, 0.0 to 1.0

* Default: 0.0

Represents the minimum probability for a token to be considered, relative to the probability of
the most likely token. (The value changes depending on the confidence level of the most probable
token.) If your Min-P is set to 0.1, that means it will only allow for tokens that are at least
1/10th as probable as the best possible option.

## Top A

* Key: `top_a`

* Optional, **float**, 0.0 to 1.0

* Default: 0.0

Consider only the top tokens with "sufficiently high" probabilities based on the probability of
the most likely token. Think of it like a dynamic Top-P. A lower Top-A value focuses the choices
based on the highest probability token but with a narrower scope. A higher Top-A value does not
necessarily affect the creativity of the output, but rather refines the filtering process based on
the maximum probability.

## Seed

* Key: `seed`

* Optional, **integer**

If specified, the inferencing will sample deterministically, such that repeated requests with the
same seed and parameters should return the same result. Determinism is not guaranteed for some
models.

## Max Tokens

* Key: `max_tokens`

* Optional, **integer**, 1 or above

This sets the upper limit for the number of tokens the model can generate in response. It won't
produce more than this limit. The maximum value is the context length minus the prompt length.

## Logit Bias

* Key: `logit_bias`

* Optional, **map**

Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an
associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated
by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1
should decrease or increase likelihood of selection; values like -100 or 100 should result in a
ban or exclusive selection of the relevant token.
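
For instance, to effectively ban one token and mildly boost another (the token IDs below are placeholders, not real lookups; find the correct IDs with the tokenizer of your target model):

```typescript
// Sketch: logit_bias maps tokenizer-specific token IDs to biases.
// 50256 and 1024 are hypothetical IDs used for illustration only.
await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <OPENROUTER_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'Tell me a story.' }],
    logit_bias: {
      '50256': -100, // effectively ban this token
      '1024': 5, // nudge this token upward
    },
  }),
});
```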

## Logprobs

* Key: `logprobs`

* Optional, **boolean**

Whether to return log probabilities of the output tokens or not. If true, returns the log
probabilities of each output token returned.

## Top Logprobs

* Key: `top_logprobs`

* Optional, **integer**

An integer between 0 and 20 specifying the number of most likely tokens to return at each token
position, each with an associated log probability. logprobs must be set to true if this parameter
is used.

## Response Format

* Key: `response_format`

* Optional, **map**

Forces the model to produce a specific output format. Setting to `{ "type": "json_object" }` enables
JSON mode, which guarantees the message the model generates is valid JSON.

**Note**: when using JSON mode, you should also instruct the model to produce JSON yourself via a
system or user message.

## Structured Outputs

* Key: `structured_outputs`

* Optional, **boolean**

Indicates whether the model can return structured outputs using `response_format` with `json_schema`.

## Stop

* Key: `stop`

* Optional, **array**

Stop generation immediately if the model encounters any token specified in the stop array.

## Tools

* Key: `tools`

* Optional, **array**

Tool calling parameter, following OpenAI's tool calling request shape. For non-OpenAI providers,
it will be transformed accordingly. [Click here to learn more about tool calling](/docs/requests#tool-calls).

## Tool Choice

* Key: `tool_choice`

* Optional, **string or object**

Controls which (if any) tool is called by the model. 'none' means the model will not call any tool
and instead generates a message. 'auto' means the model can pick between generating a message or
calling one or more tools. 'required' means the model must call one or more tools. Specifying a
particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model
to call that tool.
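
Putting `tools` and `tool_choice` together, a request that defines a single tool and forces the model to call it could look like this (a minimal sketch; the `get_weather` function is illustrative):

```typescript
// Sketch: define one tool and force the model to call it.
await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <OPENROUTER_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'What is the weather in Lisbon?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather', // hypothetical function name
          description: 'Get the current weather for a city',
          parameters: {
            type: 'object',
            properties: { city: { type: 'string' } },
            required: ['city'],
          },
        },
      },
    ],
    // Force a call to get_weather instead of letting the model decide:
    tool_choice: { type: 'function', function: { name: 'get_weather' } },
  }),
});
```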

# Errors

> Learn how to handle errors in OpenRouter API interactions. Comprehensive guide to error codes,
messages, and best practices for error handling.

For errors, OpenRouter returns a JSON response with the following shape:

```typescript
type ErrorResponse = {
error: {
code: number;
message: string;
metadata?: Record<string, unknown>;
};
};
```

The HTTP Response will have the same status code as `error.code`, forming a request error if:

* Your original request is invalid
* Your API key/account is out of credits

Otherwise, the returned HTTP response status will be <code>{HTTPStatus.S200_OK}</code> and any
error that occurred while the LLM was producing the output will be emitted in the response body or
as an SSE data event.

Example code for printing errors in JavaScript:

```typescript
const request = await fetch('https://openrouter.ai/...');
console.log(request.status); // Will be an error code unless the model started processing your request
const response = await request.json();
console.error(response.error?.code); // Will be an error code
console.error(response.error?.message);
```

## Error Codes

* **{HTTPStatus.S400_Bad_Request}**: Bad Request (invalid or missing params, CORS)
* **{HTTPStatus.S401_Unauthorized}**: Invalid credentials (OAuth session expired, disabled/invalid API key)
* **{HTTPStatus.S402_Payment_Required}**: Your account or API key has insufficient credits. Add
more credits and retry the request.
* **{HTTPStatus.S403_Forbidden}**: Your chosen model requires moderation and your input was
flagged
* **{HTTPStatus.S408_Request_Timeout}**: Your request timed out
* **{HTTPStatus.S429_Too_Many_Requests}**: You are being rate limited
* **{HTTPStatus.S502_Bad_Gateway}**: Your chosen model is down or we received an invalid response
from it
* **{HTTPStatus.S503_Service_Unavailable}**: There is no available model provider that meets your
routing requirements

## Moderation Errors

If your input was flagged, the `error.metadata` will contain information about the issue. The
shape of the metadata is as follows:

```typescript
type ModerationErrorMetadata = {
  reasons: string[]; // Why your input was flagged
  // The text segment that was flagged, limited to 100 characters.
  // If the flagged input is longer than 100 characters, it will be
  // truncated in the middle and replaced with ...
  flagged_input: string;
  provider_name: string; // The name of the provider that requested moderation
  model_slug: string;
};
```

## Provider Errors

If the model provider encounters an error, the `error.metadata` will contain information about the
issue. The shape of the metadata is as follows:

```typescript
type ProviderErrorMetadata = {
provider_name: string; // The name of the provider that encountered the error
raw: unknown; // The raw error from the provider
};
```

## When No Content is Generated

Occasionally, the model may not generate any content. This typically occurs when:

* The model is warming up from a cold start
* The system is scaling up to handle more requests

Warm-up times usually range from a few seconds to a few minutes, depending on the model and
provider.

If you encounter persistent no-content issues, consider implementing a simple retry mechanism or
trying again with a different provider or model that has more recent activity.
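
A simple retry along those lines might look like this (a minimal sketch; tune the attempt count and backoff to your workload):

```typescript
// Sketch: retry with linear backoff when no content comes back.
async function completeWithRetry(body: unknown, attempts = 3): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: 'Bearer <OPENROUTER_API_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });
    const json = await res.json();
    const content = json.choices?.[0]?.message?.content;
    if (content) return content;
    // No content yet (e.g. a cold start); wait before retrying.
    await new Promise((r) => setTimeout(r, 1000 * (i + 1)));
  }
  throw new Error('No content generated after retries');
}
```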

Additionally, be aware that in some cases, you may still be charged for the prompt processing cost
by the upstream provider, even if no content is generated.

# Completion

```http
POST https://openrouter.ai/api/v1/completions
Content-Type: application/json
```

Send a completion request to a selected model (text-only format)

## Response Body

- 200: Successful completion

## Examples

```shell
curl -X POST https://openrouter.ai/api/v1/completions \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"model": "model",
"prompt": "prompt"

https://openrouter.ai/docs/llms-full.txt 65/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/completions"

payload = {
"model": "model",
"prompt": "prompt"
}
headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/completions';
const options = {
method: 'POST',
headers: {Authorization: 'Bearer <token>', 'Content-Type': 'application/json'},
body: '{"model":"model","prompt":"prompt"}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/completions"

payload := strings.NewReader("{\n \"model\": \"model\",\n \"prompt\": \"prompt\"\n}")

req, _ := http.NewRequest("POST", url, payload)

req.Header.Add("Authorization", "Bearer <token>")


req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

https://openrouter.ai/docs/llms-full.txt 66/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/completions")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["Authorization"] = 'Bearer <token>'
request["Content-Type"] = 'application/json'
request.body = "{\n \"model\": \"model\",\n \"prompt\": \"prompt\"\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://openrouter.ai/api/v1/completions")
.header("Authorization", "Bearer <token>")
.header("Content-Type", "application/json")
.body("{\n \"model\": \"model\",\n \"prompt\": \"prompt\"\n}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://openrouter.ai/api/v1/completions', [


'body' => '{
"model": "model",
"prompt": "prompt"
}',
'headers' => [
'Authorization' => 'Bearer <token>',
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/completions");
var request = new RestRequest(Method.POST);
request.AddHeader("Authorization", "Bearer <token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n \"model\": \"model\",\n \"prompt\":
\"prompt\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
]
let parameters = [
"model": "model",
"prompt": "prompt"
] as [String : Any]

let postData = JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/completions")!


as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Chat completion

```http
POST https://openrouter.ai/api/v1/chat/completions
Content-Type: application/json
```

Send a chat completion request to a selected model. The request must contain a "messages" array.
All advanced options from the base request are also supported.

## Response Body

- 200: Successful completion

## Examples

```shell
curl -X POST https://openrouter.ai/api/v1/chat/completions \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "What is the meaning of life?"
}
]
}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/chat/completions"

payload = { "model": "openai/gpt-3.5-turbo" }


headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/chat/completions';
const options = {
method: 'POST',
headers: {Authorization: 'Bearer <token>', 'Content-Type': 'application/json'},
body: '{"model":"openai/gpt-3.5-turbo"}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/chat/completions"

payload := strings.NewReader("{\n  \"model\": \"openai/gpt-3.5-turbo\",\n  \"messages\": [{\"role\": \"user\", \"content\": \"What is the meaning of life?\"}]\n}")

req, _ := http.NewRequest("POST", url, payload)

req.Header.Add("Authorization", "Bearer <token>")


req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/chat/completions")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["Authorization"] = 'Bearer <token>'
request["Content-Type"] = 'application/json'
request.body = "{\n  \"model\": \"openai/gpt-3.5-turbo\",\n  \"messages\": [{\"role\": \"user\", \"content\": \"What is the meaning of life?\"}]\n}"

response = http.request(request)
puts response.read_body

```

```java
HttpResponse<String> response = Unirest.post("https://openrouter.ai/api/v1/chat/completions")
.header("Authorization", "Bearer <token>")
.header("Content-Type", "application/json")
  .body("{\n  \"model\": \"openai/gpt-3.5-turbo\",\n  \"messages\": [{\"role\": \"user\", \"content\": \"What is the meaning of life?\"}]\n}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://openrouter.ai/api/v1/chat/completions', [


'body' => '{
"model": "openai/gpt-3.5-turbo"
}',
'headers' => [
'Authorization' => 'Bearer <token>',
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/chat/completions");
var request = new RestRequest(Method.POST);
request.AddHeader("Authorization", "Bearer <token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n \"model\": \"openai/gpt-3.5-turbo\"\n}",
ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
]
let parameters = ["model": "openai/gpt-3.5-turbo"] as [String : Any]

let postData = JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/chat/completions")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()

```

# Get a generation

```http
GET https://openrouter.ai/api/v1/generation
```

Returns metadata about a specific generation request

## Query Parameters

- Id (required)

## Response Body

- 200: Returns the request metadata for this generation

## Examples

```shell
curl -G https://openrouter.ai/api/v1/generation \
-H "Authorization: Bearer <token>" \
-d id=id
```

```python
import requests

url = "https://openrouter.ai/api/v1/generation"

querystring = {"id":"id"}

headers = {"Authorization": "Bearer <token>"}

response = requests.get(url, headers=headers, params=querystring)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/generation?id=id';
const options = {method: 'GET', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/generation?id=id"

req, _ := http.NewRequest("GET", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/generation?id=id")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/generation?id=id")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/generation?id=id', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/generation?id=id");
var request = new RestRequest(Method.GET);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/generation?


id=id")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# List available models

```http
GET https://openrouter.ai/api/v1/models
```

Returns a list of models available through the API.

Note: `supported_parameters` is a union of all parameters supported by all providers for this model. There may not be a single provider which offers all of the listed parameters for a model.

## Response Body

- 200: List of available models

## Examples

```shell
curl https://openrouter.ai/api/v1/models
```

```python
import requests

url = "https://openrouter.ai/api/v1/models"

response = requests.get(url)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/models';
const options = {method: 'GET'};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/models"

req, _ := http.NewRequest("GET", url, nil)

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/models")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/models")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/models');

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/models");
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/models")! as


URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
}
})

dataTask.resume()
```

# List endpoints for a model

```http
GET https://openrouter.ai/api/v1/models/{author}/{slug}/endpoints
```

## Path Parameters

- Author (required)
- Slug (required)

## Response Body

- 200: List of endpoints for the model

## Examples

```shell
curl https://openrouter.ai/api/v1/models/author/slug/endpoints
```

```python
import requests

url = "https://openrouter.ai/api/v1/models/author/slug/endpoints"

response = requests.get(url)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/models/author/slug/endpoints';
const options = {method: 'GET'};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/models/author/slug/endpoints"

req, _ := http.NewRequest("GET", url, nil)

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/models/author/slug/endpoints")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response =
Unirest.get("https://openrouter.ai/api/v1/models/author/slug/endpoints")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/models/author/slug/endpoints');

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/models/author/slug/endpoints");
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/models/author/slug/endpoints")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Get credits

```http
GET https://openrouter.ai/api/v1/credits
```

Returns the total credits purchased and used for the authenticated user

## Response Body

- 200: Returns the total credits purchased and used

## Examples

```shell
curl https://openrouter.ai/api/v1/credits \
-H "Authorization: Bearer <token>"
```

```python
import requests

url = "https://openrouter.ai/api/v1/credits"

headers = {"Authorization": "Bearer <token>"}

response = requests.get(url, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/credits';
const options = {method: 'GET', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/credits"

req, _ := http.NewRequest("GET", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/credits")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/credits")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/credits', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/credits");
var request = new RestRequest(Method.GET);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/credits")! as


URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)

}
})

dataTask.resume()
```

# Create a Coinbase charge

```http
POST https://openrouter.ai/api/v1/credits/coinbase
Content-Type: application/json
```

Creates and hydrates a Coinbase Commerce charge for cryptocurrency payments

## Response Body

- 200: Returns the calldata to fulfill the transaction

## Examples

```shell
curl -X POST https://openrouter.ai/api/v1/credits/coinbase \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"amount": 1.1,
"sender": "sender",
"chain_id": 1
}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/credits/coinbase"

payload = {
"amount": 1.1,
"sender": "sender",
"chain_id": 1
}
headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/credits/coinbase';
const options = {
method: 'POST',
headers: {Authorization: 'Bearer <token>', 'Content-Type': 'application/json'},
body: '{"amount":1.1,"sender":"sender","chain_id":1}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}

```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/credits/coinbase"

payload := strings.NewReader("{\n  \"amount\": 1.1,\n  \"sender\": \"sender\",\n  \"chain_id\": 1\n}")

req, _ := http.NewRequest("POST", url, payload)

req.Header.Add("Authorization", "Bearer <token>")


req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/credits/coinbase")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["Authorization"] = 'Bearer <token>'
request["Content-Type"] = 'application/json'
request.body = "{\n \"amount\": 1.1,\n \"sender\": \"sender\",\n \"chain_id\": 1\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://openrouter.ai/api/v1/credits/coinbase")
.header("Authorization", "Bearer <token>")
.header("Content-Type", "application/json")
.body("{\n \"amount\": 1.1,\n \"sender\": \"sender\",\n \"chain_id\": 1\n}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://openrouter.ai/api/v1/credits/coinbase', [


'body' => '{
"amount": 1.1,

https://openrouter.ai/docs/llms-full.txt 80/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
"sender": "sender",
"chain_id": 1
}',
'headers' => [
'Authorization' => 'Bearer <token>',
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/credits/coinbase");
var request = new RestRequest(Method.POST);
request.AddHeader("Authorization", "Bearer <token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n \"amount\": 1.1,\n \"sender\": \"sender\",\n
\"chain_id\": 1\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
]
let parameters = [
"amount": 1.1,
"sender": "sender",
"chain_id": 1
] as [String : Any]

let postData = JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/credits/coinbase")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response,
error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Exchange authorization code for API key

```http
POST https://openrouter.ai/api/v1/auth/keys
Content-Type: application/json
```

Exchange an authorization code from the PKCE flow for a user-controlled API key

## Response Body

- 200: Successfully exchanged code for an API key
- 400: Invalid code parameter or invalid code_challenge_method
- 403: Invalid code or code_verifier or already used code
- 405: Method Not Allowed - Make sure you're using POST and HTTPS
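
A successful exchange returns the new user-controlled API key in the response body. A representative (illustrative) shape:

```json
{
  "key": "sk-or-v1-..."
}
```

Store this key securely; treat it like any other OpenRouter API key.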

## Examples

```shell
curl -X POST https://openrouter.ai/api/v1/auth/keys \
-H "Content-Type: application/json" \
-d '{
"code": "code"
}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/auth/keys"

payload = { "code": "code" }


headers = {"Content-Type": "application/json"}

response = requests.post(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/auth/keys';
const options = {
method: 'POST',
headers: {'Content-Type': 'application/json'},
body: '{"code":"code"}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/auth/keys"

payload := strings.NewReader("{\n \"code\": \"code\"\n}")

req, _ := http.NewRequest("POST", url, payload)

req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/auth/keys")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["Content-Type"] = 'application/json'
request.body = "{\n \"code\": \"code\"\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://openrouter.ai/api/v1/auth/keys")
.header("Content-Type", "application/json")
.body("{\n \"code\": \"code\"\n}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://openrouter.ai/api/v1/auth/keys', [


'body' => '{
"code": "code"
}',
'headers' => [
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/auth/keys");
var request = new RestRequest(Method.POST);
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n \"code\": \"code\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Content-Type": "application/json"]


let parameters = ["code": "code"] as [String : Any]

let postData = try? JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/auth/keys")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Get current API key

```http
GET https://openrouter.ai/api/v1/key
```

Get information on the API key associated with the current authentication session

## Response Body

- 200: Successfully retrieved API key information
- 401: Unauthorized - API key is required
- 405: Method Not Allowed - Only GET method is supported
- 500: Internal server error
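
A successful response wraps the key metadata in a `data` object. The exact field set may vary; a representative (illustrative) shape:

```json
{
  "data": {
    "label": "sk-or-v1-...",
    "usage": 1.25,
    "limit": null,
    "is_free_tier": false,
    "rate_limit": {
      "requests": 10,
      "interval": "10s"
    }
  }
}
```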

## Examples

```shell
curl https://openrouter.ai/api/v1/key \
-H "Authorization: Bearer <token>"
```

```python
import requests

url = "https://openrouter.ai/api/v1/key"

headers = {"Authorization": "Bearer <token>"}

response = requests.get(url, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/key';
const options = {method: 'GET', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/key"

req, _ := http.NewRequest("GET", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/key")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/key")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/key', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/key");
var request = new RestRequest(Method.GET);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/key")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# List API keys

```http
GET https://openrouter.ai/api/v1/keys
```

Returns a list of all API keys associated with the account. Requires a Provisioning API key.

## Query Parameters

- Offset (optional): Offset for the API keys
- IncludeDisabled (optional): Whether to include disabled API keys in the response
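
Both parameters are passed in the query string. A minimal sketch (the snake_case parameter names below are an assumption; adjust them to match the API reference if they differ):

```shell
curl "https://openrouter.ai/api/v1/keys?offset=10&include_disabled=true" \
  -H "Authorization: Bearer <token>"
```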

## Response Body

- 200: List of API keys

## Examples

```shell
curl https://openrouter.ai/api/v1/keys \
-H "Authorization: Bearer <token>"
```

```python
import requests

url = "https://openrouter.ai/api/v1/keys"

headers = {"Authorization": "Bearer <token>"}

response = requests.get(url, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/keys';
const options = {method: 'GET', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/keys"

req, _ := http.NewRequest("GET", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/keys")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/keys")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/keys', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/keys");
var request = new RestRequest(Method.GET);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/keys")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Create API key

```http
POST https://openrouter.ai/api/v1/keys
Content-Type: application/json
```

Creates a new API key. Requires a Provisioning API key.

## Response Body

- 200: Created API key
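
The request body requires a `name` for the new key. A spending cap is often attached at creation time as well; the `limit` field below (a credit limit in USD) is an assumption for illustration:

```shell
curl -X POST https://openrouter.ai/api/v1/keys \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
  "name": "My App Key",
  "limit": 10
}'
```

The created key is typically returned only in this response, so store it securely.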

## Examples

```shell
curl -X POST https://openrouter.ai/api/v1/keys \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"name": "name"
}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/keys"

payload = { "name": "name" }


headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/keys';
const options = {
method: 'POST',
headers: {Authorization: 'Bearer <token>', 'Content-Type': 'application/json'},
body: '{"name":"name"}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/keys"

payload := strings.NewReader("{\n \"name\": \"name\"\n}")

req, _ := http.NewRequest("POST", url, payload)

req.Header.Add("Authorization", "Bearer <token>")


req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/keys")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["Authorization"] = 'Bearer <token>'
request["Content-Type"] = 'application/json'
request.body = "{\n \"name\": \"name\"\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://openrouter.ai/api/v1/keys")
.header("Authorization", "Bearer <token>")
.header("Content-Type", "application/json")
.body("{\n \"name\": \"name\"\n}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://openrouter.ai/api/v1/keys', [


'body' => '{
"name": "name"
}',
'headers' => [
'Authorization' => 'Bearer <token>',
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/keys");
var request = new RestRequest(Method.POST);
request.AddHeader("Authorization", "Bearer <token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n \"name\": \"name\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
]
let parameters = ["name": "name"] as [String : Any]

let postData = try? JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/keys")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Get API key

```http
GET https://openrouter.ai/api/v1/keys/{hash}
```

Returns details about a specific API key. Requires a Provisioning API key.

## Path Parameters

- Hash (required): The hash of the API key

## Response Body

- 200: API key details

## Examples

```shell
curl https://openrouter.ai/api/v1/keys/hash \
-H "Authorization: Bearer <token>"
```

```python
import requests

url = "https://openrouter.ai/api/v1/keys/hash"

headers = {"Authorization": "Bearer <token>"}

response = requests.get(url, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/keys/hash';
const options = {method: 'GET', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/keys/hash"

req, _ := http.NewRequest("GET", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/keys/hash")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://openrouter.ai/api/v1/keys/hash")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://openrouter.ai/api/v1/keys/hash', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/keys/hash");
var request = new RestRequest(Method.GET);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/keys/hash")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Delete API key

```http
DELETE https://openrouter.ai/api/v1/keys/{hash}
```

Deletes an API key. Requires a Provisioning API key.

## Path Parameters

- Hash (required): The hash of the API key

## Response Body

- 200: Successfully deleted API key

## Examples

```shell
curl -X DELETE https://openrouter.ai/api/v1/keys/hash \
-H "Authorization: Bearer <token>"
```

```python
import requests

url = "https://openrouter.ai/api/v1/keys/hash"

headers = {"Authorization": "Bearer <token>"}

response = requests.delete(url, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/keys/hash';
const options = {method: 'DELETE', headers: {Authorization: 'Bearer <token>'}};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/keys/hash"

req, _ := http.NewRequest("DELETE", url, nil)

req.Header.Add("Authorization", "Bearer <token>")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/keys/hash")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Delete.new(url)
request["Authorization"] = 'Bearer <token>'

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.delete("https://openrouter.ai/api/v1/keys/hash")
.header("Authorization", "Bearer <token>")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('DELETE', 'https://openrouter.ai/api/v1/keys/hash', [


'headers' => [
'Authorization' => 'Bearer <token>',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/keys/hash");
var request = new RestRequest(Method.DELETE);
request.AddHeader("Authorization", "Bearer <token>");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["Authorization": "Bearer <token>"]

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/keys/hash")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# Update API key

```http
PATCH https://openrouter.ai/api/v1/keys/{hash}
Content-Type: application/json
```

Updates an existing API key. Requires a Provisioning API key.

## Path Parameters

- Hash (required): The hash of the API key

## Response Body

- 200: Updated API key
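
The examples below send an empty JSON object, which leaves the key unchanged. In practice you would include only the fields to modify; the `name` and `disabled` fields in this sketch are illustrative assumptions:

```shell
curl -X PATCH https://openrouter.ai/api/v1/keys/hash \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
  "name": "Renamed Key",
  "disabled": true
}'
```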

## Examples

```shell
curl -X PATCH https://openrouter.ai/api/v1/keys/hash \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{}'
```

```python
import requests

url = "https://openrouter.ai/api/v1/keys/hash"

payload = {}
headers = {
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
}

response = requests.patch(url, json=payload, headers=headers)

print(response.json())
```

```javascript
const url = 'https://openrouter.ai/api/v1/keys/hash';
const options = {
method: 'PATCH',
headers: {Authorization: 'Bearer <token>', 'Content-Type': 'application/json'},
body: '{}'
};

try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
} catch (error) {
console.error(error);
}
```

```go
package main

import (
"fmt"
"strings"
"net/http"
"io"
)

func main() {

url := "https://openrouter.ai/api/v1/keys/hash"

payload := strings.NewReader("{}")

req, _ := http.NewRequest("PATCH", url, payload)

req.Header.Add("Authorization", "Bearer <token>")


req.Header.Add("Content-Type", "application/json")

res, _ := http.DefaultClient.Do(req)

defer res.Body.Close()
body, _ := io.ReadAll(res.Body)

fmt.Println(res)
fmt.Println(string(body))

}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://openrouter.ai/api/v1/keys/hash")

http = Net::HTTP.new(url.host, url.port)


http.use_ssl = true

request = Net::HTTP::Patch.new(url)
request["Authorization"] = 'Bearer <token>'
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.patch("https://openrouter.ai/api/v1/keys/hash")
.header("Authorization", "Bearer <token>")
.header("Content-Type", "application/json")
.body("{}")
.asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('PATCH', 'https://openrouter.ai/api/v1/keys/hash', [


'body' => '{}',
'headers' => [
'Authorization' => 'Bearer <token>',
'Content-Type' => 'application/json',
],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://openrouter.ai/api/v1/keys/hash");
var request = new RestRequest(Method.PATCH);
request.AddHeader("Authorization", "Bearer <token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
"Authorization": "Bearer <token>",
"Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try? JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://openrouter.ai/api/v1/keys/hash")! as URL,
cachePolicy: .useProtocolCachePolicy,
timeoutInterval: 10.0)
request.httpMethod = "PATCH"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared


let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
if (error != nil) {
print(error as Any)
} else {
let httpResponse = response as? HTTPURLResponse
print(httpResponse)
}
})

dataTask.resume()
```

# BYOK

> Learn how to use your existing AI provider keys with OpenRouter. Integrate your own API keys while leveraging OpenRouter's unified interface and features.

## Bring your own API Keys

OpenRouter supports both OpenRouter credits and the option to bring your own provider keys (BYOK).

When you use OpenRouter credits, your rate limits for each provider are managed by OpenRouter.

Using provider keys enables direct control over rate limits and costs via your provider account.

Your provider keys are securely encrypted and used for all requests routed through the specified
provider.

Manage keys in your [account settings](/settings/integrations).

The cost of using custom provider keys on OpenRouter is **5% of what the same model/provider would
cost normally on OpenRouter** and will be deducted from your OpenRouter credits.
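
For example, a request that would normally cost $1.00 in credits on OpenRouter deducts only $0.05 from your credits when routed through your own key; the underlying provider then bills you directly for the model usage itself.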

### Automatic Fallback

You can configure individual keys to act as fallbacks.

When "Use this key as a fallback" is enabled for a key, OpenRouter will prioritize using your
credits. If it hits a rate limit or encounters a failure, it will then retry with your key.

Conversely, if "Use this key as a fallback" is disabled for a key, OpenRouter will prioritize
using your key. If it hits a rate limit or encounters a failure, it will then retry with your
credits.

### Azure API Keys

To use Azure AI Services with OpenRouter, you'll need to provide your Azure API key configuration
in JSON format. Each key configuration requires the following fields:

```json
{
"model_slug": "the-openrouter-model-slug",
"endpoint_url": "https://<resource>.services.ai.azure.com/deployments/<model-
id>/chat/completions?api-version=<api-version>",
"api_key": "your-azure-api-key",
"model_id": "the-azure-model-id"
}
```

You can find these values in your Azure AI Services resource:

1. **endpoint\_url**: Navigate to your Azure AI Services resource in the Azure portal. In the "Overview" section, you'll find your endpoint URL. Make sure to append `/chat/completions` to the base URL. You can read more in the [Azure Foundry documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/model-inference/concepts/endpoints?tabs=python).

2. **api\_key**: In the same "Overview" section of your Azure AI Services resource, you can find
your API key under "Keys and Endpoint".

3. **model\_id**: This is the name of your model deployment in Azure AI Services.

4. **model\_slug**: This is the OpenRouter model identifier you want to use this key for.

Since Azure supports multiple model deployments, you can provide an array of configurations for
different models:

```json
[
{
"model_slug": "mistralai/mistral-large",
"endpoint_url": "https://example-project.openai.azure.com/openai/deployments/mistral-
large/chat/completions?api-version=2024-08-01-preview",
"api_key": "your-azure-api-key",
"model_id": "mistral-large"
},
{
"model_slug": "openai/gpt-4o",
"endpoint_url": "https://example-project.openai.azure.com/openai/deployments/gpt-
4o/chat/completions?api-version=2024-08-01-preview",
"api_key": "your-azure-api-key",
"model_id": "gpt-4o"

https://openrouter.ai/docs/llms-full.txt 110/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
}
]
```

Make sure to replace the URL with your own project URL, and ensure it ends with `/chat/completions` along with the API version you want to use.

### AWS Bedrock API Keys

To use Amazon Bedrock with OpenRouter, you'll need to provide your AWS credentials in JSON format.
The configuration requires the following fields:

```json
{
"accessKeyId": "your-aws-access-key-id",
"secretAccessKey": "your-aws-secret-access-key",
"region": "your-aws-region"
}
```

You can find these values in your AWS account:

1. **accessKeyId**: This is your AWS Access Key ID. You can create or find your access keys in the
AWS Management Console under "Security Credentials" in your AWS account.

2. **secretAccessKey**: This is your AWS Secret Access Key, which is provided when you create an
access key.

3. **region**: The AWS region where your Amazon Bedrock models are deployed (e.g., "us-east-1",
"us-west-2").

Make sure your AWS IAM user or role has the necessary permissions to access Amazon Bedrock
services. At minimum, you'll need permissions for:

* `bedrock:InvokeModel`
* `bedrock:InvokeModelWithResponseStream` (for streaming responses)

Example IAM policy:

```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream"
],
"Resource": "*"
}
]
}
```

For enhanced security, we recommend creating dedicated IAM users with limited permissions
specifically for use with OpenRouter.

Learn more in the [AWS Bedrock Getting Started with the API](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started-api.html) documentation, [IAM Permissions Setup](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) guide, or the [AWS Bedrock API Reference](https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html).

# Crypto API

> Learn how to purchase OpenRouter credits using cryptocurrency. Complete guide to Coinbase integration, supported chains, and automated credit purchases.

You can purchase credits using cryptocurrency through our Coinbase integration. This can either
happen through the UI, on your [credits page](https://openrouter.ai/settings/credits), or through
our API as described below. While other forms of payment are possible, this guide specifically
shows how to pay with the chain's native token.

Headless credit purchases involve three steps:

1. Getting the calldata for a new credit purchase
2. Sending a transaction on-chain using that data
3. Detecting low account balance, and purchasing more

## Getting Credit Purchase Calldata

Make a POST request to `/api/v1/credits/coinbase` to create a new charge. You'll include the
amount of credits you want to purchase (in USD, up to \${maxCryptoDollarPurchase}), the address
you'll be sending the transaction from, and the EVM chain ID of the network you'll be sending on.

Currently, we only support the following chains (mainnet only):

* Ethereum (chain ID 1)
* Polygon (chain ID 137)
* Base (chain ID 8453) ***recommended***

```typescript
const response = await fetch('https://openrouter.ai/api/v1/credits/coinbase', {
method: 'POST',
headers: {
Authorization: 'Bearer <OPENROUTER_API_KEY>',
'Content-Type': 'application/json',
},
body: JSON.stringify({
amount: 10, // Target credit amount in USD
sender: '0x9a85CB3bfd494Ea3a8C9E50aA6a3c1a7E8BACE11',
chain_id: 8453,
}),
});
const responseJSON = await response.json();
```

The response includes the charge details and transaction data needed to execute the on-chain
payment:

```json
{
"data": {
"id": "...",
"created_at": "2024-01-01T00:00:00Z",
"expires_at": "2024-01-01T01:00:00Z",
"web3_data": {
"transfer_intent": {
"metadata": {
"chain_id": 8453,
"contract_address": "0x03059433bcdb6144624cc2443159d9445c32b7a8",
"sender": "0x9a85CB3bfd494Ea3a8C9E50aA6a3c1a7E8BACE11"
},
"call_data": {
"recipient_amount": "...",
"deadline": "...",
"recipient": "...",
"recipient_currency": "...",
"refund_destination": "...",
"fee_amount": "...",
"id": "...",
"operator": "...",
"signature": "...",
"prefix": "..."
}
}
}
}
}
```

## Sending the Transaction

You can use [viem](https://viem.sh) (or another similar EVM client) to execute the transaction on-chain.

In this example, we'll be fulfilling the charge using the [swapAndTransferUniswapV3Native()](https://github.com/coinbase/commerce-onchain-payment-protocol/blob/d891289bd1f41bb95f749af537f2b6a36b17f889/contracts/interfaces/ITransfers.sol#L168-L171) function. Other methods of swapping are also available, and you can learn more by checking out Coinbase's [onchain payment protocol here](https://github.com/coinbase/commerce-onchain-payment-protocol/tree/master). Note that if you are trying to pay in a less common ERC-20, there is added complexity in needing to make sure that there is sufficient liquidity in the pool to swap the tokens.

```typescript
import { createPublicClient, createWalletClient, http, parseEther } from 'viem';
import { privateKeyToAccount } from 'viem/accounts';
import { base } from 'viem/chains';

// The ABI for Coinbase's onchain payment protocol
const abi = [
{
inputs: [
{
internalType: 'contract IUniversalRouter',
name: '_uniswap',
type: 'address',
},
{ internalType: 'contract Permit2', name: '_permit2', type: 'address' },
{ internalType: 'address', name: '_initialOperator', type: 'address' },
{
internalType: 'address',
name: '_initialFeeDestination',
type: 'address',
},
{
internalType: 'contract IWrappedNativeCurrency',
name: '_wrappedNativeCurrency',
type: 'address',
},
],
stateMutability: 'nonpayable',
type: 'constructor',
},
{ inputs: [], name: 'AlreadyProcessed', type: 'error' },
{ inputs: [], name: 'ExpiredIntent', type: 'error' },
{
inputs: [
{ internalType: 'address', name: 'attemptedCurrency', type: 'address' },
],
name: 'IncorrectCurrency',
type: 'error',
},
{ inputs: [], name: 'InexactTransfer', type: 'error' },
{
inputs: [{ internalType: 'uint256', name: 'difference', type: 'uint256' }],
name: 'InsufficientAllowance',
type: 'error',
},
{
inputs: [{ internalType: 'uint256', name: 'difference', type: 'uint256' }],
name: 'InsufficientBalance',
type: 'error',
},
{
inputs: [{ internalType: 'int256', name: 'difference', type: 'int256' }],
name: 'InvalidNativeAmount',
type: 'error',
},
{ inputs: [], name: 'InvalidSignature', type: 'error' },
{ inputs: [], name: 'InvalidTransferDetails', type: 'error' },
{
inputs: [
{ internalType: 'address', name: 'recipient', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
{ internalType: 'bool', name: 'isRefund', type: 'bool' },
{ internalType: 'bytes', name: 'data', type: 'bytes' },
],
name: 'NativeTransferFailed',
type: 'error',
},
{ inputs: [], name: 'NullRecipient', type: 'error' },
{ inputs: [], name: 'OperatorNotRegistered', type: 'error' },
{ inputs: [], name: 'PermitCallFailed', type: 'error' },
{
inputs: [{ internalType: 'bytes', name: 'reason', type: 'bytes' }],
name: 'SwapFailedBytes',
type: 'error',
},
{
inputs: [{ internalType: 'string', name: 'reason', type: 'string' }],
name: 'SwapFailedString',
type: 'error',
},
{
anonymous: false,
inputs: [
{
indexed: false,
internalType: 'address',
name: 'operator',
type: 'address',
},
{
indexed: false,
internalType: 'address',
name: 'feeDestination',
type: 'address',
},
],
name: 'OperatorRegistered',
type: 'event',
},
{
anonymous: false,
inputs: [
{
indexed: false,
internalType: 'address',
name: 'operator',
type: 'address',
},
],
name: 'OperatorUnregistered',
type: 'event',
},
{
anonymous: false,
inputs: [
{
indexed: true,
internalType: 'address',
name: 'previousOwner',
type: 'address',
},
{
indexed: true,
internalType: 'address',
name: 'newOwner',
type: 'address',
},
],
name: 'OwnershipTransferred',
type: 'event',
},
{
anonymous: false,
inputs: [
{
indexed: false,
internalType: 'address',
name: 'account',
type: 'address',
},
],
name: 'Paused',
type: 'event',
},
{
anonymous: false,
inputs: [
{
indexed: true,
internalType: 'address',
name: 'operator',
type: 'address',
},
{ indexed: false, internalType: 'bytes16', name: 'id', type: 'bytes16' },
{
indexed: false,
internalType: 'address',
name: 'recipient',
type: 'address',
},
{
indexed: false,
internalType: 'address',
name: 'sender',
type: 'address',
},
{
indexed: false,
internalType: 'uint256',
name: 'spentAmount',
type: 'uint256',
},
{
indexed: false,
internalType: 'address',
name: 'spentCurrency',
type: 'address',
},
],
name: 'Transferred',
type: 'event',
},
{
anonymous: false,
inputs: [
{
indexed: false,
      internalType: 'address',
name: 'account',
type: 'address',
},
],
name: 'Unpaused',
type: 'event',
},
{
inputs: [],
name: 'owner',
outputs: [{ internalType: 'address', name: '', type: 'address' }],
stateMutability: 'view',
type: 'function',
},
{
inputs: [],
name: 'pause',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [],
name: 'paused',
outputs: [{ internalType: 'bool', name: '', type: 'bool' }],
stateMutability: 'view',
type: 'function',
},
{
inputs: [],
name: 'permit2',
outputs: [{ internalType: 'contract Permit2', name: '', type: 'address' }],
stateMutability: 'view',
type: 'function',
},
{
inputs: [],
name: 'registerOperator',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{ internalType: 'address', name: '_feeDestination', type: 'address' },
],
name: 'registerOperatorWithFeeDestination',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [],
name: 'renounceOwnership',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [{ internalType: 'address', name: 'newSweeper', type: 'address' }],
name: 'setSweeper',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
      components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{
components: [
{ internalType: 'address', name: 'owner', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
],
internalType: 'struct EIP2612SignatureTransferData',
name: '_signatureTransferData',
type: 'tuple',
},
],
name: 'subsidizedTransferToken',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
      ],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{ internalType: 'uint24', name: 'poolFeesTier', type: 'uint24' },
],
name: 'swapAndTransferUniswapV3Native',
outputs: [],
stateMutability: 'payable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{
components: [
{
components: [
{
components: [
{ internalType: 'address', name: 'token', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.TokenPermissions',
name: 'permitted',
type: 'tuple',
},
{ internalType: 'uint256', name: 'nonce', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.PermitTransferFrom',
name: 'permit',
type: 'tuple',
},
{
components: [
{ internalType: 'address', name: 'to', type: 'address' },
{
internalType: 'uint256',
name: 'requestedAmount',
              type: 'uint256',
},
],
internalType: 'struct ISignatureTransfer.SignatureTransferDetails',
name: 'transferDetails',
type: 'tuple',
},
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
],
internalType: 'struct Permit2SignatureTransferData',
name: '_signatureTransferData',
type: 'tuple',
},
{ internalType: 'uint24', name: 'poolFeesTier', type: 'uint24' },
],
name: 'swapAndTransferUniswapV3Token',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{ internalType: 'address', name: '_tokenIn', type: 'address' },
{ internalType: 'uint256', name: 'maxWillingToPay', type: 'uint256' },
{ internalType: 'uint24', name: 'poolFeesTier', type: 'uint24' },
],
name: 'swapAndTransferUniswapV3TokenPreApproved',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{ internalType: 'address payable', name: 'destination', type: 'address' },
],
name: 'sweepETH',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
  {
inputs: [
{ internalType: 'address payable', name: 'destination', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
],
name: 'sweepETHAmount',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{ internalType: 'address', name: '_token', type: 'address' },
{ internalType: 'address', name: 'destination', type: 'address' },
],
name: 'sweepToken',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{ internalType: 'address', name: '_token', type: 'address' },
{ internalType: 'address', name: 'destination', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
],
name: 'sweepTokenAmount',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [],
name: 'sweeper',
outputs: [{ internalType: 'address', name: '', type: 'address' }],
stateMutability: 'view',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
    ],
name: 'transferNative',
outputs: [],
stateMutability: 'payable',
type: 'function',
},
{
inputs: [{ internalType: 'address', name: 'newOwner', type: 'address' }],
name: 'transferOwnership',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{
components: [
{
components: [
{
components: [
{ internalType: 'address', name: 'token', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.TokenPermissions',
name: 'permitted',
type: 'tuple',
},
{ internalType: 'uint256', name: 'nonce', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.PermitTransferFrom',
name: 'permit',
type: 'tuple',
},
{
components: [
{ internalType: 'address', name: 'to', type: 'address' },
{
internalType: 'uint256',
              name: 'requestedAmount',
type: 'uint256',
},
],
internalType: 'struct ISignatureTransfer.SignatureTransferDetails',
name: 'transferDetails',
type: 'tuple',
},
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
],
internalType: 'struct Permit2SignatureTransferData',
name: '_signatureTransferData',
type: 'tuple',
},
],
name: 'transferToken',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
],
name: 'transferTokenPreApproved',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [],
name: 'unpause',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [],
name: 'unregisterOperator',
outputs: [],
stateMutability: 'nonpayable',
    type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
{
components: [
{
components: [
{
components: [
{ internalType: 'address', name: 'token', type: 'address' },
{ internalType: 'uint256', name: 'amount', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.TokenPermissions',
name: 'permitted',
type: 'tuple',
},
{ internalType: 'uint256', name: 'nonce', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
],
internalType: 'struct ISignatureTransfer.PermitTransferFrom',
name: 'permit',
type: 'tuple',
},
{
components: [
{ internalType: 'address', name: 'to', type: 'address' },
{
internalType: 'uint256',
name: 'requestedAmount',
type: 'uint256',
},
],
internalType: 'struct ISignatureTransfer.SignatureTransferDetails',
name: 'transferDetails',
type: 'tuple',
},
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
],
internalType: 'struct Permit2SignatureTransferData',
      name: '_signatureTransferData',
type: 'tuple',
},
],
name: 'unwrapAndTransfer',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
{ internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
],
name: 'unwrapAndTransferPreApproved',
outputs: [],
stateMutability: 'nonpayable',
type: 'function',
},
{
inputs: [
{
components: [
{ internalType: 'uint256', name: 'recipientAmount', type: 'uint256' },
{ internalType: 'uint256', name: 'deadline', type: 'uint256' },
{
internalType: 'address payable',
name: 'recipient',
type: 'address',
},
{
internalType: 'address',
name: 'recipientCurrency',
type: 'address',
},
{
internalType: 'address',
name: 'refundDestination',
type: 'address',
},
{ internalType: 'uint256', name: 'feeAmount', type: 'uint256' },
{ internalType: 'bytes16', name: 'id', type: 'bytes16' },
        { internalType: 'address', name: 'operator', type: 'address' },
{ internalType: 'bytes', name: 'signature', type: 'bytes' },
{ internalType: 'bytes', name: 'prefix', type: 'bytes' },
],
internalType: 'struct TransferIntent',
name: '_intent',
type: 'tuple',
},
],
name: 'wrapAndTransfer',
outputs: [],
stateMutability: 'payable',
type: 'function',
},
{ stateMutability: 'payable', type: 'receive' },
];

// Set up viem clients
const publicClient = createPublicClient({
chain: base,
transport: http(),
});
const account = privateKeyToAccount('0x...');
const walletClient = createWalletClient({
chain: base,
transport: http(),
account,
});

// Use the calldata included in the charge response
const { contract_address } =
responseJSON.data.web3_data.transfer_intent.metadata;
const call_data = responseJSON.data.web3_data.transfer_intent.call_data;

// When transacting in ETH, a pool fees tier of 500 (the lowest) is very
// likely to be sufficient. However, if you plan to swap less-common
// ERC-20 tokens, it is recommended to call that chain's Uniswap QuoterV2
// contract to check pool liquidity, and choose the lowest fee tier that
// has enough liquidity.
const poolFeesTier = 500;

// Simulate the transaction first to catch the most common revert reasons
const { request } = await publicClient.simulateContract({
abi,
account,
address: contract_address,
functionName: 'swapAndTransferUniswapV3Native',
args: [
{
recipientAmount: BigInt(call_data.recipient_amount),
deadline: BigInt(
Math.floor(new Date(call_data.deadline).getTime() / 1000),
),
recipient: call_data.recipient,
recipientCurrency: call_data.recipient_currency,
refundDestination: call_data.refund_destination,
feeAmount: BigInt(call_data.fee_amount),
id: call_data.id,
operator: call_data.operator,
signature: call_data.signature,
prefix: call_data.prefix,
},
poolFeesTier,
],
  // Transaction value in ETH. Include a little extra to ensure the
  // transaction and swap succeed; any excess funds are refunded to
  // your sender address afterwards.
value: parseEther('0.004'),
});

// Send the transaction on chain
const txHash = await walletClient.writeContract(request);
console.log('Transaction hash:', txHash);
```

Once the transaction succeeds on chain, we'll add credits to your account. You can track the
transaction status using the returned transaction hash.

Credit purchases lower than \$500 will be immediately credited once the transaction is on chain.
Above \$500, there is a \~15 minute confirmation delay, ensuring the chain does not re-org your
purchase.

## Detecting Low Balance

While it is possible to simply run down the balance until your app starts receiving 402 error
codes for insufficient credits, this gap in service while topping up might not be desirable.

To avoid this, you can periodically call the `GET /api/v1/credits` endpoint to check your
available credits.

```typescript
const response = await fetch('https://openrouter.ai/api/v1/credits', {
method: 'GET',
headers: { Authorization: 'Bearer <OPENROUTER_API_KEY>' },
});
const { data } = await response.json();
```

The response includes your total credits purchased and usage, where your current balance is the
difference between the two:

```json
{
"data": {
"total_credits": 50.0,
"total_usage": 42.0
}
}
```

Note that these values are cached, and may be up to 60 seconds stale.
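
For example, a small helper can compute the remaining balance from these two fields and warn when
it drops below a threshold (the threshold and alerting here are illustrative, not part of the API):

```typescript
async function getRemainingCredits(apiKey: string): Promise<number> {
  const response = await fetch('https://openrouter.ai/api/v1/credits', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { data } = await response.json();
  // Current balance is total credits purchased minus total usage
  return data.total_credits - data.total_usage;
}

const LOW_BALANCE_THRESHOLD = 5; // in credits; tune to your burn rate
const balance = await getRemainingCredits('<OPENROUTER_API_KEY>');
if (balance < LOW_BALANCE_THRESHOLD) {
  console.warn(`Balance is low (${balance.toFixed(2)} credits); top up soon.`);
}
```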

# OAuth PKCE

> Implement secure user authentication with OpenRouter using OAuth PKCE. Complete guide to setting
up and managing OAuth authentication flows.

Users can connect to OpenRouter in one click using [Proof Key for Code Exchange (PKCE)](https://oauth.net/2/pkce/).

Here's a step-by-step guide:

## PKCE Guide

### Step 1: Send your user to OpenRouter

To start the PKCE flow, send your user to OpenRouter's `/auth` URL with a `callback_url` parameter
pointing back to your site:

<CodeGroup>
```txt title="With S256 Code Challenge (Recommended)" wordWrap
https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>&code_challenge=<CODE_CHALLENGE>&code_challenge_method=S256
```

```txt title="With Plain Code Challenge" wordWrap


https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>&code_challenge=

https://openrouter.ai/docs/llms-full.txt 126/147
5/8/25, 2:53 AM openrouter.ai/docs/llms-full.txt
<CODE_CHALLENGE>&code_challenge_method=plain
```

```txt title="Without Code Challenge" wordWrap


https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>
```
</CodeGroup>

The `code_challenge` parameter is optional but recommended.
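
If you are assembling this URL in the browser, `URL` and `URLSearchParams` keep the query encoding
correct; a minimal sketch:

```typescript
const authUrl = new URL('https://openrouter.ai/auth');
authUrl.searchParams.set('callback_url', '<YOUR_SITE_URL>');
authUrl.searchParams.set('code_challenge', '<CODE_CHALLENGE>');
authUrl.searchParams.set('code_challenge_method', 'S256');

// Redirect the user to start the flow
window.location.href = authUrl.toString();
```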

Your user will be prompted to log in to OpenRouter and authorize your app. After authorization,
they will be redirected back to your site with a `code` parameter in the URL:


<Tip title="Use SHA-256 for Maximum Security">


For maximum security, set `code_challenge_method` to `S256`, and set `code_challenge` to the
base64url encoding of the SHA-256 hash of `code_verifier`.

For more info, [visit Auth0's docs](https://auth0.com/docs/get-started/authentication-and-authorization-flow/call-your-api-using-the-authorization-code-flow-with-pkce#parameters).
</Tip>

#### How to Generate a Code Challenge

The following example leverages the Web Crypto API and the Buffer API to generate a code challenge
for the S256 method. You will need a bundler to use the Buffer API in the web browser:

<CodeGroup>
```typescript title="Generate Code Challenge"
import { Buffer } from 'buffer';

async function createSHA256CodeChallenge(input: string) {
  const encoder = new TextEncoder();
const data = encoder.encode(input);
const hash = await crypto.subtle.digest('SHA-256', data);
return Buffer.from(hash).toString('base64url');
}

const codeVerifier = 'your-random-string';

const generatedCodeChallenge = await createSHA256CodeChallenge(codeVerifier);
```
</CodeGroup>
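
The `code_verifier` itself should be a high-entropy random string of 43 to 128 characters. A
sketch using the Web Crypto API (32 random bytes base64url-encode to exactly 43 characters,
meeting the minimum):

```typescript
function generateCodeVerifier(): string {
  const bytes = new Uint8Array(32);
  crypto.getRandomValues(bytes);
  // base64url-encode without padding
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

const codeVerifier = generateCodeVerifier();
```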

#### Localhost Apps

If your app is a local-first app or otherwise doesn't have a public URL, it is recommended to test
with `http://localhost:3000` as the callback and referrer URLs.

When moving to production, replace the localhost/private referrer URL with a public GitHub repo or
a link to your project website.

### Step 2: Exchange the code for a user-controlled API key

After the user logs in with OpenRouter, they are redirected back to your site with a `code`
parameter in the URL:


Extract this code using the browser API:

<CodeGroup>
```typescript title="Extract Code"
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get('code');
```
</CodeGroup>

Then use it to make an API call to `https://openrouter.ai/api/v1/auth/keys` to exchange the code
for a user-controlled API key:

<CodeGroup>
```typescript title="Exchange Code"
const response = await fetch('https://openrouter.ai/api/v1/auth/keys', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
code: '<CODE_FROM_QUERY_PARAM>',
code_verifier: '<CODE_VERIFIER>', // If code_challenge was used
code_challenge_method: '<CODE_CHALLENGE_METHOD>', // If code_challenge was used
}),
});

const { key } = await response.json();
```
</CodeGroup>

And that's it for the PKCE flow!

### Step 3: Use the API key

Store the API key securely within the user's browser or in your own database, and use it to [make
OpenRouter requests](/api-reference/completion).

<CodeGroup>
```typescript title="Make an OpenRouter request"
fetch('https://openrouter.ai/api/v1/chat/completions', {
method: 'POST',
headers: {
Authorization: 'Bearer <API_KEY>',
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'openai/gpt-4o',
messages: [
{
role: 'user',
content: 'Hello!',
},
],
}),
});
```
</CodeGroup>

## Error Codes

* `400 Invalid code_challenge_method`: Make sure you're using the same code challenge method in
step 1 as in step 2.
* `403 Invalid code or code_verifier`: Make sure your user is logged in to OpenRouter, and that
`code_verifier` and `code_challenge_method` are correct.
* `405 Method Not Allowed`: Make sure you're using `POST` and `HTTPS` for your request.

## External Tools

* [PKCE Tools](https://example-app.com/pkce)
* [Online PKCE Generator](https://tonyxu-io.github.io/pkce-generator/)

# Using MCP Servers with OpenRouter

> Learn how to use MCP Servers with OpenRouter

MCP servers are a popular way of providing LLMs with tool calling abilities, and are an
alternative to using OpenAI-compatible tool calling.

By converting MCP (Anthropic) tool definitions to OpenAI-compatible tool definitions, you can use
MCP servers with OpenRouter.

In this example, we'll use [Anthropic's MCP client SDK](https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#writing-mcp-clients) to
interact with the File System MCP, all with OpenRouter under the hood.

<Warning>
Note that interacting with MCP servers is more complex than calling a REST
endpoint. The MCP protocol is stateful and requires session management. The
example below uses the MCP client SDK, but is still somewhat complex.
</Warning>

First, some setup. To run this, pip install the required packages and create a `.env` file with
OPENAI\_API\_KEY set to your OpenRouter API key (the OpenAI SDK reads that variable by default).
This example also assumes the directory `/Applications` exists.

```python
import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from openai import OpenAI
from dotenv import load_dotenv
import json

load_dotenv() # load environment variables from .env

MODEL = "anthropic/claude-3-7-sonnet"

SERVER_CONFIG = {
"command": "npx",
"args": ["-y",
"@modelcontextprotocol/server-filesystem",
f"/Applications/"],
"env": None
}
```

Next, our helper function to convert MCP tool definitions to OpenAI tool definitions:

```python

def convert_tool_format(tool):
converted_tool = {
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": {
"type": "object",
"properties": tool.inputSchema["properties"],
"required": tool.inputSchema["required"]
}
}
}
return converted_tool

```

And, the MCP client itself; a regrettable \~100 lines of code. Note that the SERVER\_CONFIG is
hard-coded into the client, but of course could be parameterized for other MCP servers.

```python
class MCPClient:
    def __init__(self):
self.session: Optional[ClientSession] = None
self.exit_stack = AsyncExitStack()
self.openai = OpenAI(
base_url="https://openrouter.ai/api/v1"
)

    async def connect_to_server(self, server_config):
        server_params = StdioServerParameters(**server_config)
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(
            ClientSession(self.stdio, self.write)
        )

await self.session.initialize()

        # List available tools from the MCP server
        response = await self.session.list_tools()
print("\nConnected to server with tools:", [tool.name for tool in response.tools])

self.messages = []

async def process_query(self, query: str) -> str:

self.messages.append({
"role": "user",
"content": query
})

        response = await self.session.list_tools()
        available_tools = [convert_tool_format(tool) for tool in response.tools]

response = self.openai.chat.completions.create(
model=MODEL,
tools=available_tools,
messages=self.messages
)
self.messages.append(response.choices[0].message.model_dump())

final_text = []
content = response.choices[0].message
if content.tool_calls is not None:
tool_name = content.tool_calls[0].function.name
tool_args = content.tool_calls[0].function.arguments
tool_args = json.loads(tool_args) if tool_args else {}

            # Execute tool call
            try:
                result = await self.session.call_tool(tool_name, tool_args)
                final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
                tool_output = result.content
            except Exception as e:
                print(f"Error calling tool {tool_name}: {e}")
                tool_output = f"Error calling tool {tool_name}: {e}"

            self.messages.append({
                "role": "tool",
                "tool_call_id": content.tool_calls[0].id,
                "name": tool_name,
                "content": tool_output
            })

response = self.openai.chat.completions.create(
model=MODEL,
max_tokens=1000,
messages=self.messages,
)

final_text.append(response.choices[0].message.content)
else:
            final_text.append(content.content)

return "\n".join(final_text)

    async def chat_loop(self):
        """Run an interactive chat loop"""
print("\nMCP Client Started!")
print("Type your queries or 'quit' to exit.")

while True:
try:
query = input("\nQuery: ").strip()
result = await self.process_query(query)
print("Result:")
print(result)

except Exception as e:
print(f"Error: {str(e)}")

    async def cleanup(self):
        await self.exit_stack.aclose()

async def main():
    client = MCPClient()
try:
await client.connect_to_server(SERVER_CONFIG)
await client.chat_loop()
finally:
await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
```

Assembling all of the above code into mcp-client.py, you get a client that behaves as follows
(some outputs truncated for brevity):

```bash
% python mcp-client.py

Secure MCP Filesystem Server running on stdio
Allowed directories: [ '/Applications' ]

Connected to server with tools: ['read_file', 'read_multiple_files', 'write_file'...]

MCP Client Started!
Type your queries or 'quit' to exit.

Query: Do I have microsoft office installed?

Result:
[Calling tool list_allowed_directories with args {}]
I can check if Microsoft Office is installed in the Applications folder:

Query: continue

Result:
[Calling tool search_files with args {'path': '/Applications', 'pattern': 'Microsoft'}]
Now let me check specifically for Microsoft Office applications:

Query: continue

Result:
I can see from the search results that Microsoft Office is indeed installed on your system.
The search found the following main Microsoft Office applications:

1. Microsoft Excel - /Applications/Microsoft Excel.app
2. Microsoft PowerPoint - /Applications/Microsoft PowerPoint.app
3. Microsoft Word - /Applications/Microsoft Word.app
4. OneDrive - /Applications/OneDrive.app (which includes Microsoft SharePoint integration)
```

# Provider Integration

> Learn how to integrate your AI models with OpenRouter. Complete guide for providers to make
their models available through OpenRouter's unified API.

## For Providers

If you'd like to be a model provider and sell inference on OpenRouter, [fill out our form](https://openrouter.notion.site/15a2fd57c4dc8067bc61ecd5263b31fd) to get started.

To be eligible to provide inference on OpenRouter you must have the following:

### 1. List Models Endpoint

You must implement an endpoint that returns a list of all models that OpenRouter should serve
from your platform. Below is an example of the response format:

```json
{
"data": [
{
"id": "anthropic/claude-2.0",
"name": "Anthropic: Claude v2.0",
"created": 1690502400,
"description": "Anthropic's flagship model...", // Optional
"context_length": 100000, // Required
"max_completion_tokens": 4096, // Optional
"quantization": "fp8", // Required
"pricing": {
"prompt": "0.000008", // pricing per 1 token
"completion": "0.000024", // pricing per 1 token
"image": "0", // pricing per 1 image
"request": "0" // pricing per 1 request
}
}
]
}
```

NOTE: `pricing` fields are in string format to avoid floating point precision issues, and must be
in USD.

Valid quantization values are: `int4`, `int8`, `fp4`, `fp6`, `fp8`, `fp16`, `bf16`, `fp32`.
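
As a sketch of what serving this might look like, here is a minimal Express handler returning the
format above (the framework, route path, and model entry are illustrative assumptions, not
OpenRouter requirements):

```typescript
import express from 'express';

const app = express();

// Hypothetical catalog; replace with your platform's model metadata
const models = [
  {
    id: 'acme/acme-large-v1',
    name: 'Acme: Large v1',
    created: 1690502400,
    context_length: 100000,
    quantization: 'fp8',
    pricing: {
      prompt: '0.000008', // USD per token, as a string
      completion: '0.000024',
      image: '0',
      request: '0',
    },
  },
];

app.get('/v1/models', (_req, res) => {
  res.json({ data: models });
});

app.listen(8080);
```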

### 2. Auto Top Up or Invoicing

For OpenRouter to use a provider, we must be able to pay for inference automatically. This can be
done via auto top-up or invoicing.

# Reasoning Tokens

> Learn how to use reasoning tokens to enhance AI model outputs. Implement step-by-step reasoning
traces for better decision making and transparency.

For models that support it, the OpenRouter API can return **Reasoning Tokens**, also known as
thinking tokens. OpenRouter normalizes the different ways of customizing the amount of reasoning
tokens that the model will use, providing a unified interface across different providers.

Reasoning tokens provide a transparent look into the reasoning steps taken by a model. Reasoning
tokens are considered output tokens and charged accordingly.

Reasoning tokens are included in the response by default if the model decides to output them.
Reasoning tokens will appear in the `reasoning` field of each message, unless you decide to
exclude them.

<Note title="Some reasoning models do not return their reasoning tokens">


While most models and providers make reasoning tokens available in the
response, some (like the OpenAI o-series and Gemini Flash Thinking) do not.
</Note>

## Controlling Reasoning Tokens

You can control reasoning tokens in your requests using the `reasoning` parameter:

```json
{
"model": "your-model",
"messages": [],
"reasoning": {
// One of the following (not both):
"effort": "high", // Can be "high", "medium", or "low" (OpenAI-style)
"max_tokens": 2000, // Specific token limit (Anthropic-style)

    // Optional: Default is false. All models support this.
    "exclude": false // Set to true to exclude reasoning tokens from response
}
}
```

The `reasoning` config object consolidates settings for controlling reasoning strength across
different models. See the Note for each option below to see which models are supported and how
other models will behave.

### Max Tokens for Reasoning

<Note title="Supported models">


Currently supported by Anthropic and Gemini thinking models
</Note>

For models that support reasoning token allocation, you can control it like this:

* `"max_tokens": 2000` - Directly specifies the maximum number of tokens to use for reasoning

For models that only support `reasoning.effort` (see below), the `max_tokens` value will be used
to determine the effort level.

### Reasoning Effort Level

<Note title="Supported models">


Currently supported by the OpenAI o-series
</Note>

* `"effort": "high"` - Allocates a large portion of tokens for reasoning (approximately 80% of
max\_tokens)
* `"effort": "medium"` - Allocates a moderate portion of tokens (approximately 50% of max\_tokens)
* `"effort": "low"` - Allocates a smaller portion of tokens (approximately 20% of max\_tokens)

For models that only support `reasoning.max_tokens`, the effort level will be set based on the
percentages above.

### Excluding Reasoning Tokens

If you want the model to use reasoning internally but not include it in the response:

* `"exclude": true` - The model will still use reasoning, but it won't be returned in the response

Reasoning tokens will appear in the `reasoning` field of each message.

## Legacy Parameters

For backward compatibility, OpenRouter still supports the following legacy parameters:

* `include_reasoning: true` - Equivalent to `reasoning: {}`
* `include_reasoning: false` - Equivalent to `reasoning: { exclude: true }`

However, we recommend using the new unified `reasoning` parameter for better control and future
compatibility.
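
For example, these two request bodies are equivalent:

```json
// Legacy
{ "model": "your-model", "messages": [], "include_reasoning": false }

// Unified equivalent
{ "model": "your-model", "messages": [], "reasoning": { "exclude": true } }
```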

## Examples

### Basic Usage with Reasoning Tokens

<Template
data={{
API_KEY_REF,
MODEL: "openai/o3-mini"
}}
>
<CodeGroup>
```python Python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}
payload = {
"model": "{{MODEL}}",
"messages": [
{"role": "user", "content": "How would you build the world's tallest skyscraper?"}
],
"reasoning": {
"effort": "high" # Use high reasoning effort
}
}

response = requests.post(url, headers=headers, data=json.dumps(payload))

print(response.json()['choices'][0]['message']['reasoning'])
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '{{API_KEY_REF}}',
});

async function getResponseWithReasoning() {
  const response = await openai.chat.completions.create({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
effort: 'high', // Use high reasoning effort
},
});

console.log('REASONING:', response.choices[0].message.reasoning);
console.log('CONTENT:', response.choices[0].message.content);
}

getResponseWithReasoning();
```

</CodeGroup>
</Template>

### Using Max Tokens for Reasoning

For models that support direct token allocation (like Anthropic models), you can specify the exact
number of tokens to use for reasoning:

<Template
data={{
API_KEY_REF,
MODEL: "anthropic/claude-3.7-sonnet"
}}
>
<CodeGroup>
```python Python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}
payload = {
"model": "{{MODEL}}",
"messages": [
{"role": "user", "content": "What's the most efficient algorithm for sorting a large
dataset?"}
],
"reasoning": {
"max_tokens": 2000 # Allocate 2000 tokens (or approximate effort) for reasoning
}
}

response = requests.post(url, headers=headers, data=json.dumps(payload))

print(response.json()['choices'][0]['message']['reasoning'])
print(response.json()['choices'][0]['message']['content'])
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '{{API_KEY_REF}}',
});

async function getResponseWithReasoning() {
  const response = await openai.chat.completions.create({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
max_tokens: 2000, // Allocate 2000 tokens (or approximate effort) for reasoning
},
});

console.log('REASONING:', response.choices[0].message.reasoning);
console.log('CONTENT:', response.choices[0].message.content);
}

getResponseWithReasoning();
```
</CodeGroup>

</Template>

### Excluding Reasoning Tokens from Response

If you want the model to use reasoning internally but not include it in the response:

<Template
data={{
API_KEY_REF,
MODEL: "deepseek/deepseek-r1"
}}
>
<CodeGroup>
```python Python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}
payload = {
"model": "{{MODEL}}",
"messages": [
{"role": "user", "content": "Explain quantum computing in simple terms."}
],
"reasoning": {
"effort": "high",
"exclude": true # Use reasoning but don't include it in the response
}
}

response = requests.post(url, headers=headers, data=json.dumps(payload))

# No reasoning field in the response
print(response.json()['choices'][0]['message']['content'])
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '{{API_KEY_REF}}',
});

async function getResponseWithReasoning() {
  const response = await openai.chat.completions.create({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
effort: 'high',
exclude: true, // Use reasoning but don't include it in the response
},
});

  // The reasoning field is omitted when exclude is set
  console.log('CONTENT:', response.choices[0].message.content);
}

getResponseWithReasoning();
```
</CodeGroup>
</Template>


### Advanced Usage: Reasoning Chain-of-Thought

This example shows how to use reasoning tokens in a more complex workflow. It injects one model's
reasoning into another model to improve its response quality:

<Template
data={{
API_KEY_REF,
}}
>
<CodeGroup>
```python Python
import requests
import json

question = "Which is bigger: 9.11 or 9.9?"

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}

def do_req(model, content, reasoning_config=None):
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": content}
        ],
        "stop": "</think>",
        # Merge in any reasoning settings the caller provides
        **(reasoning_config or {})
    }

return requests.post(url, headers=headers, data=json.dumps(payload))

# Get reasoning from a capable model
content = f"{question} Please think this through, but don't output an answer"
reasoning_response = do_req("deepseek/deepseek-r1", content)
reasoning = reasoning_response.json()['choices'][0]['message']['reasoning']

# Let's test! Here's the naive response:
simple_response = do_req("openai/gpt-4o-mini", question)
print(simple_response.json()['choices'][0]['message']['content'])

# Here's the response with the reasoning token injected:
content = f"{question}. Here is some context to help you: {reasoning}"
smart_response = do_req("openai/gpt-4o-mini", content)
print(smart_response.json()['choices'][0]['message']['content'])
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey,
});

async function doReq(model, content, reasoningConfig) {
  const payload = {
model,
messages: [{ role: 'user', content }],
stop: '</think>',
...reasoningConfig,
};

return openai.chat.completions.create(payload);
}

async function getResponseWithReasoning() {
const question = 'Which is bigger: 9.11 or 9.9?';
const reasoningResponse = await doReq(
'deepseek/deepseek-r1',
`${question} Please think this through, but don't output an answer`,
);
const reasoning = reasoningResponse.choices[0].message.reasoning;

  // Let's test! Here's the naive response:
  const simpleResponse = await doReq('openai/gpt-4o-mini', question);
console.log(simpleResponse.choices[0].message.content);

  // Here's the response with the reasoning token injected:
  const content = `${question}. Here is some context to help you: ${reasoning}`;
const smartResponse = await doReq('openai/gpt-4o-mini', content);
console.log(smartResponse.choices[0].message.content);
}

getResponseWithReasoning();
```
</CodeGroup>
</Template>

## Provider-Specific Reasoning Implementation

### Anthropic Models with Reasoning Tokens

The latest Claude models, such as [anthropic/claude-3.7-sonnet](https://openrouter.ai/anthropic/claude-3.7-sonnet), support working with and returning reasoning
tokens.

You can enable reasoning on Anthropic models in two ways:

1. Using the `:thinking` variant suffix (e.g., `anthropic/claude-3.7-sonnet:thinking`). The
   thinking variant defaults to high effort.
2. Using the unified `reasoning` parameter with either `effort` or `max_tokens`

#### Reasoning Max Tokens for Anthropic Models

When using Anthropic models with reasoning:

* When using the `reasoning.max_tokens` parameter, that value is used directly with a minimum of
1024 tokens.
* When using the `:thinking` variant suffix or the `reasoning.effort` parameter, the
budget\_tokens are calculated based on the `max_tokens` value.

The reasoning token allocation is capped at 32,000 tokens maximum and 1024 tokens minimum. The
formula for calculating the budget\_tokens is: `budget_tokens = max(min(max_tokens *
{effort_ratio}, 32000), 1024)`

effort\_ratio is 0.8 for high effort, 0.5 for medium effort, and 0.2 for low effort.

**Important**: `max_tokens` must be strictly higher than the reasoning budget to ensure there are
tokens available for the final response after thinking.
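
Putting the formula into code (a direct transcription of the rule above; the helper name is ours):

```typescript
type Effort = 'high' | 'medium' | 'low';

const EFFORT_RATIO: Record<Effort, number> = { high: 0.8, medium: 0.5, low: 0.2 };

function anthropicBudgetTokens(maxTokens: number, effort: Effort): number {
  // budget_tokens = max(min(max_tokens * effort_ratio, 32000), 1024)
  return Math.max(Math.min(maxTokens * EFFORT_RATIO[effort], 32000), 1024);
}

// e.g. max_tokens of 10000 at high effort reserves 8000 tokens for reasoning
console.log(anthropicBudgetTokens(10000, 'high')); // 8000
```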

<Note title="Token Usage and Billing">


Please note that reasoning tokens are counted as output tokens for billing
purposes. Using reasoning tokens will increase your token usage but can
significantly improve the quality of model responses.
</Note>

### Examples with Anthropic Models

#### Example 1: Streaming mode with reasoning tokens

<Template
data={{
API_KEY_REF,
MODEL: "anthropic/claude-3.7-sonnet"

}}
>
<CodeGroup>
```python Python
from openai import OpenAI

client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key="{{API_KEY_REF}}",
)

def chat_completion_with_reasoning(messages):
    response = client.chat.completions.create(
        model="{{MODEL}}",
        messages=messages,
        max_tokens=10000,
        # The OpenAI SDK doesn't accept unknown kwargs, so pass the
        # OpenRouter-specific `reasoning` field via extra_body
        extra_body={
            "reasoning": {
                "max_tokens": 8000  # Directly specify reasoning token budget
            }
        },
        stream=True
    )
    return response

for chunk in chat_completion_with_reasoning([
    {"role": "user", "content": "What's bigger, 9.9 or 9.11?"}
]):
if hasattr(chunk.choices[0].delta, 'reasoning') and chunk.choices[0].delta.reasoning:
print(f"REASONING: {chunk.choices[0].delta.reasoning}")
elif chunk.choices[0].delta.content:
print(f"CONTENT: {chunk.choices[0].delta.content}")
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey,
});

async function chatCompletionWithReasoning(messages) {
  const response = await openai.chat.completions.create({
    model: '{{MODEL}}',
    messages,
    max_tokens: 10000, // body fields are snake_case on the wire
    reasoning: {
      max_tokens: 8000, // Directly specify reasoning token budget
    },
    stream: true,
  });

return response;
}

(async () => {
for await (const chunk of chatCompletionWithReasoning([
{ role: 'user', content: "What's bigger, 9.9 or 9.11?" },
])) {
if (chunk.choices[0].delta.reasoning) {
console.log(`REASONING: ${chunk.choices[0].delta.reasoning}`);
} else if (chunk.choices[0].delta.content) {
console.log(`CONTENT: ${chunk.choices[0].delta.content}`);
}
}
})();
```
</CodeGroup>
</Template>


# Usage Accounting

> Learn how to track AI model usage including prompt tokens, completion tokens, and cached tokens
without additional API calls.

The OpenRouter API provides built-in **Usage Accounting** that allows you to track AI model usage
without making additional API calls. This feature provides detailed information about token
counts, costs, and caching status directly in your API responses.

## Usage Information

When enabled, the API will return detailed usage information including:

1. Prompt and completion token counts using the model's native tokenizer
2. Cost in credits
3. Reasoning token counts (if applicable)
4. Cached token counts (if available)

This information is included in the last SSE message for streaming responses, or in the complete
response for non-streaming requests.

## Enabling Usage Accounting

You can enable usage accounting in your requests by including the `usage` parameter:

```json
{
"model": "your-model",
"messages": [],
"usage": {
"include": true
}
}
```

## Response Format

When usage accounting is enabled, the response will include a `usage` object with detailed token
information:

```json
{
"object": "chat.completion.chunk",
"usage": {
"completion_tokens": 2,
"completion_tokens_details": {
"reasoning_tokens": 0
},
"cost": 197,
"prompt_tokens": 194,
"prompt_tokens_details": {
"cached_tokens": 0
},
"total_tokens": 196
}
}
```
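
For typed access in TypeScript, an interface mirroring the fields above might look like this (our
own sketch, not an official SDK type):

```typescript
interface OpenRouterUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  cost: number; // in credits
  prompt_tokens_details?: { cached_tokens: number };
  completion_tokens_details?: { reasoning_tokens: number };
}
```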

<Note title="Performance Impact">


Enabling usage accounting will add a few hundred milliseconds to the last
response as the API calculates token counts and costs. This only affects the
final message and does not impact overall streaming performance.
</Note>

## Benefits

1. **Efficiency**: Get usage information without making separate API calls
2. **Accuracy**: Token counts are calculated using the model's native tokenizer
3. **Transparency**: Track costs and cached token usage in real-time
4. **Detailed Breakdown**: Separate counts for prompt, completion, reasoning, and cached tokens

## Best Practices

1. Enable usage tracking when you need to monitor token consumption or costs
2. Account for the slight delay in the final response when usage accounting is enabled
3. Consider implementing usage tracking in development to optimize token usage before production
4. Use the cached token information to optimize your application's performance

## Alternative: Getting Usage via Generation ID

You can also retrieve usage information asynchronously by using the generation ID returned from
your API calls. This is particularly useful when you want to fetch usage statistics after the
completion has finished or when you need to audit historical usage.

To use this method:

1. Make your chat completion request as normal
2. Note the `id` field in the response
3. Use that ID to fetch usage information via the `/generation` endpoint

For more details on this approach, see the [Get a Generation](/docs/api-reference/get-a-generation) documentation.
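
As a sketch, fetching usage for a finished generation might look like this (we assume the `data`
payload carries the usage fields; see the linked reference for the exact response shape):

```typescript
async function getGenerationStats(generationId: string) {
  const response = await fetch(
    `https://openrouter.ai/api/v1/generation?id=${generationId}`,
    { headers: { Authorization: 'Bearer <OPENROUTER_API_KEY>' } },
  );
  const { data } = await response.json();
  return data; // includes token counts and cost for the generation
}
```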

## Examples

### Basic Usage with Token Tracking

<Template
data={{
API_KEY_REF,
MODEL: "anthropic/claude-3-opus"
}}
>
<CodeGroup>
```python Python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {{API_KEY_REF}}",
"Content-Type": "application/json"
}
payload = {
"model": "{{MODEL}}",
"messages": [
{"role": "user", "content": "What is the capital of France?"}
],
"usage": {
"include": True
}
}

response = requests.post(url, headers=headers, data=json.dumps(payload))

print("Response:", response.json()['choices'][0]['message']['content'])
print("Usage Stats:", response.json()['usage'])
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '{{API_KEY_REF}}',
});

async function getResponseWithUsage() {
const response = await openai.chat.completions.create({
model: '{{MODEL}}',
messages: [
{
role: 'user',
content: 'What is the capital of France?',
},
],
usage: {
include: true,
},
});

console.log('Response:', response.choices[0].message.content);
console.log('Usage Stats:', response.usage);
}

getResponseWithUsage();
```
</CodeGroup>
</Template>

### Streaming with Usage Information

This example shows how to handle usage information in streaming mode:

<Template
data={{
API_KEY_REF,
MODEL: "anthropic/claude-3-opus"
}}
>
<CodeGroup>
```python Python
from openai import OpenAI

client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key="{{API_KEY_REF}}",
)

def chat_completion_with_usage(messages):
    response = client.chat.completions.create(
        model="{{MODEL}}",
        messages=messages,
        # The OpenAI SDK doesn't accept unknown kwargs, so pass the
        # OpenRouter-specific `usage` field via extra_body
        extra_body={
            "usage": {
                "include": True
            }
        },
        stream=True
    )
    return response

for chunk in chat_completion_with_usage([
    {"role": "user", "content": "Write a haiku about Paris."}
]):
    if getattr(chunk, "usage", None):
print(f"\nUsage Statistics:")
print(f"Total Tokens: {chunk.usage.total_tokens}")
print(f"Prompt Tokens: {chunk.usage.prompt_tokens}")
print(f"Completion Tokens: {chunk.usage.completion_tokens}")
print(f"Cost: {chunk.usage.cost} credits")
elif chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
```

```typescript TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '{{API_KEY_REF}}',
});

async function chatCompletionWithUsage(messages) {
  const response = await openai.chat.completions.create({
model: '{{MODEL}}',
messages,
usage: {
include: true,
},
stream: true,
});

return response;
}

(async () => {
for await (const chunk of chatCompletionWithUsage([
{ role: 'user', content: 'Write a haiku about Paris.' },
])) {
if (chunk.usage) {
console.log('\nUsage Statistics:');
console.log(`Total Tokens: ${chunk.usage.total_tokens}`);
console.log(`Prompt Tokens: ${chunk.usage.prompt_tokens}`);
console.log(`Completion Tokens: ${chunk.usage.completion_tokens}`);
console.log(`Cost: ${chunk.usage.cost} credits`);
} else if (chunk.choices[0].delta.content) {
process.stdout.write(chunk.choices[0].delta.content);
}
}
})();
```
</CodeGroup>
</Template>

# Frameworks

> Integrate OpenRouter using popular frameworks and SDKs. Complete guides for OpenAI SDK,
LangChain, PydanticAI, and Vercel AI SDK integration.

You can find a few examples of using OpenRouter with other frameworks in [this GitHub repository](https://github.com/OpenRouterTeam/openrouter-examples). Here are some examples:

## Using the OpenAI SDK

* Using `pip install openai`: [github](https://github.com/OpenRouterTeam/openrouter-examples-python/blob/main/src/openai_test.py).
* Using `npm i openai`: [github](https://github.com/OpenRouterTeam/openrouter-examples/blob/main/examples/openai/index.ts).
<Tip>
You can also use
[Grit](https://app.grit.io/studio?key=RKC0n7ikOiTGTNVkI8uRS) to
automatically migrate your code. Simply run `npx @getgrit/launcher
openrouter`.
</Tip>

<CodeGroup>
```typescript title="TypeScript"
import OpenAI from "openai"

const openai = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: "<OPENROUTER_API_KEY>",
  defaultHeaders: {
    "HTTP-Referer": "<YOUR_SITE_URL>", // Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>", // Optional. Site title for rankings on openrouter.ai.
  },
})

async function main() {
  const completion = await openai.chat.completions.create({
model: "${Model.GPT_4_Omni}",
messages: [
{ role: "user", content: "Say this is a test" }
],
})

console.log(completion.choices[0].message)
}
main();
```

```python title="Python"
from openai import OpenAI
from os import getenv

# gets API Key from environment variable OPENROUTER_API_KEY
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=getenv("OPENROUTER_API_KEY"),
)

completion = client.chat.completions.create(
model="${Model.GPT_4_Omni}",
extra_headers={
"HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
"X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
},
# pass extra_body to access OpenRouter-only arguments.
# extra_body={
# "models": [
# "${Model.GPT_4_Omni}",
# "${Model.Mixtral_8x_22B_Instruct}"
# ]
# },
messages=[
{
"role": "user",
"content": "Say this is a test",
},
],
)
print(completion.choices[0].message.content)
```
</CodeGroup>

## Using LangChain

* Using [LangChain for Python](https://github.com/langchain-ai/langchain): [github](https://github.com/alexanderatallah/openrouter-streamlit/blob/main/pages/2_Langchain_Quickstart.py)
* Using [LangChain.js](https://github.com/langchain-ai/langchainjs): [github](https://github.com/OpenRouterTeam/openrouter-examples/blob/main/examples/langchain/index.ts)
* Using [Streamlit](https://streamlit.io/): [github](https://github.com/alexanderatallah/openrouter-streamlit)

<CodeGroup>
```typescript title="TypeScript"
// Import path for LangChain.js v0.0.x; newer versions expose ChatOpenAI
// from the @langchain/openai package instead
import { ChatOpenAI } from 'langchain/chat_models/openai';

const chat = new ChatOpenAI(
  {
    modelName: '<model_name>',
    temperature: 0.8,
    streaming: true,
    openAIApiKey: '<OPENROUTER_API_KEY>',
},
{
    basePath: 'https://openrouter.ai/api/v1',
    baseOptions: {
headers: {
'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
},
},
},
);
```

```python title="Python"
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from os import getenv
from dotenv import load_dotenv

load_dotenv()

template = """Question: {question}


Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = ChatOpenAI(
  openai_api_key=getenv("OPENROUTER_API_KEY"),
  openai_api_base=getenv("OPENROUTER_BASE_URL"),
  model_name="<model_name>",
  model_kwargs={
    "headers": {
      "HTTP-Referer": getenv("YOUR_SITE_URL"),
      "X-Title": getenv("YOUR_SITE_NAME"),
    }
  },
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

print(llm_chain.run(question))
```
</CodeGroup>
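
If you want token-by-token output in Python as well (the TypeScript example above sets `streaming: true`), the legacy `ChatOpenAI` wrapper accepts a streaming callback. A minimal sketch under the same assumptions as the example above, with `<model_name>` again a placeholder:

```python
from os import getenv

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
  openai_api_key=getenv("OPENROUTER_API_KEY"),
  openai_api_base="https://openrouter.ai/api/v1",
  model_name="<model_name>",  # placeholder for any OpenRouter model ID
  streaming=True,
  callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens to stdout as they arrive
)

llm.predict("Tell me a joke about programming.")
```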

***

## Using PydanticAI

[PydanticAI](https://github.com/pydantic/pydantic-ai) provides a high-level interface for working with various LLM providers, including OpenRouter.

### Installation

```bash
pip install 'pydantic-ai-slim[openai]'
```

### Configuration

You can use OpenRouter with PydanticAI through its OpenAI-compatible interface:

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
  "anthropic/claude-3.5-sonnet",  # or any other OpenRouter model
  base_url="https://openrouter.ai/api/v1",
  api_key="sk-or-...",
)

agent = Agent(model)
result = await agent.run("What is the meaning of life?")
print(result)
```
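
Because PydanticAI validates model output against Pydantic types, you can also ask the agent for structured results. A minimal sketch under the same setup; `CityInfo` is a hypothetical type defined here only for illustration:

```python
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel


class CityInfo(BaseModel):
  # Hypothetical result type, introduced for this example
  city: str
  country: str


model = OpenAIModel(
  "anthropic/claude-3.5-sonnet",
  base_url="https://openrouter.ai/api/v1",
  api_key="sk-or-...",
)

# result_type asks the agent to coerce the model's reply into CityInfo
agent = Agent(model, result_type=CityInfo)
result = agent.run_sync("What is the capital of France?")
print(result.data)  # e.g. CityInfo(city='Paris', country='France')
```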

For more details about using PydanticAI with OpenRouter, see the [PydanticAI documentation](https://ai.pydantic.dev/models/#api_key-argument).

***

## Vercel AI SDK

You can use the [Vercel AI SDK](https://www.npmjs.com/package/ai) to integrate OpenRouter with your Next.js app. To get started, install [@openrouter/ai-sdk-provider](https://github.com/OpenRouterTeam/ai-sdk-provider):

```bash
npm install @openrouter/ai-sdk-provider
```

You can then use the [streamText()](https://sdk.vercel.ai/docs/reference/ai-sdk-core/stream-text) API to stream text from OpenRouter.

<CodeGroup>
```typescript title="TypeScript"
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';
import { z } from 'zod';

export const getLasagnaRecipe = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '${API_KEY_REF}',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'Write a vegetarian lasagna recipe for 4 people.',
  });

  await response.consumeStream();
  return response.text;
};

export const getWeather = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '${API_KEY_REF}',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'What is the weather in San Francisco, CA in Fahrenheit?',
    tools: {
      getCurrentWeather: {
        description: 'Get the current weather in a given location',
        parameters: z.object({
          location: z
            .string()
            .describe('The city and state, e.g. San Francisco, CA'),
          unit: z.enum(['celsius', 'fahrenheit']).optional(),
        }),
        execute: async ({ location, unit = 'celsius' }) => {
          // Mock response for the weather
          const weatherData = {
            'Boston, MA': {
              celsius: '15°C',
              fahrenheit: '59°F',
            },
            'San Francisco, CA': {
              celsius: '18°C',
              fahrenheit: '64°F',
            },
          };

          const weather = weatherData[location];

          if (!weather) {
            return `Weather data for ${location} is not available.`;
          }

          return `The current weather in ${location} is ${weather[unit]}.`;
        },
      },
    },
  });

  await response.consumeStream();
  return response.text;
};
```
</CodeGroup>
