OpenAI Compatible Server
To call the server, you can use the official OpenAI Python client library, or any other HTTP
client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
Please see the OpenAI API Reference for more information on the API. We support all
parameters except:
Chat: tools and tool_choice.
Completions: suffix.
vLLM also provides experimental support for OpenAI Vision API compatible inference. See
more details in Using VLMs.
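As a rough illustration of what such a request can look like, the sketch below passes an image using OpenAI Vision-style message content. The model name and image URL are placeholders; see Using VLMs for the authoritative details.

# Illustrative sketch only: assumes a vision-language model (the name below is a
# placeholder) is being served; see "Using VLMs" for the supported request format.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

chat_response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # placeholder vision model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
)
print(chat_response.choices[0].message.content)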
Extra Parameters
vLLM supports a set of parameters that are not part of the OpenAI API. To use them, pass them as extra parameters through the OpenAI client, or merge them directly into the JSON payload if you are calling the HTTP API directly.
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        "guided_choice": ["positive", "negative"]
    }
)
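If you are not using the OpenAI client, the same extra parameters can be merged directly into the JSON payload. A minimal sketch using the requests library (the token, model, and parameter mirror the example above):

# Minimal sketch of a direct HTTP call with an extra parameter in the JSON payload.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json={
        "model": "NousResearch/Meta-Llama-3-8B-Instruct",
        "messages": [
            {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
        ],
        # Extra (non-OpenAI) parameter merged directly into the payload:
        "guided_choice": ["positive", "negative"],
    },
)
print(response.json()["choices"][0]["message"]["content"])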
The supported extra parameters include, among others:

guided_regex: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the regex pattern."),
)
guided_choice: Optional[List[str]] = Field(
    default=None,
    description=(
        "If specified, the output will be exactly one of the choices."),
)
guided_grammar: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the context free grammar."),
)
guided_decoding_backend: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default guided decoding backend "
        "of the server for this specific request. If set, must be either "
        "'outlines' / 'lm-format-enforcer'"))
guided_whitespace_pattern: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default whitespace pattern "
        "for guided json decoding."))
priority: int = Field(
    default=0,
    description=(
        "The priority of the request (lower means earlier handling; "
        "default: 0). Any priority other than 0 will raise an error "
        "if the served model does not use priority scheduling."))
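For example, guided_regex can be passed through extra_body in the same way as guided_choice above; the pattern below is only an illustrative choice:

# Illustrative use of guided_regex; the regex pattern is only an example.
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Generate an example email address for Alan Turing."}
    ],
    extra_body={
        "guided_regex": r"\w+@\w+\.com"
    }
)
print(completion.choices[0].message.content)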
Chat Template

In order for the language model to support the chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.
An example chat template for NousResearch/Meta-Llama-3-8B-Instruct can be found
here
Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template in the --chat-template parameter with the file path to the chat template, or the template in string form. Without a chat template, the server will not be able to process chat, and all chat requests will error.
vllm serve <model> --chat-template ./path-to-chat-template.jinja
The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here.
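As a side note, you can preview how a model's chat template renders messages by applying it with the transformers tokenizer directly; this sketch is for inspection only and is independent of vLLM:

# Sketch: inspect a model's chat template with transformers (independent of vLLM).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")
print(tokenizer.chat_template is not None)  # False would mean --chat-template is needed

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)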
Named Arguments
--host
host name
--port
port number
Default: 8000
--uvicorn-log-level
log level for uvicorn
Default: "info"
--allow-credentials
allow credentials
Default: False
--allowed-origins
allowed origins
Default: ['*']
--allowed-methods
allowed methods
Default: ['*']
--allowed-headers
allowed headers
Default: ['*']
--api-key
If provided, the server will require this key to be presented in the header.
--lora-modules
LoRA module configurations in either 'name=path' format or JSON format. Example (old format): 'name=path' Example (new format): '{"name": "name", "local_path": "path", "base_model_name": "id"}'
--prompt-adapters
Prompt adapter configurations in the format name=path. Multiple adapters can be specified.
--chat-template
The file path to the chat template, or the template in single-line form for the specified model.
--response-role
The role name to return if request.add_generation_prompt=true.
Default: assistant
--disable-frontend-multiprocessing
If specified, will run the OpenAI frontend server in the same process as the model serving engine.
Default: False
--enable-auto-tool-choice
Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use.
Default: False
--tool-call-parser
Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice.
--tool-parser-plugin
Specify the tool parser plugin used to parse model-generated tool calls into OpenAI API format; the parser names registered in this plugin can then be used in --tool-call-parser.
Default: ""
--model
Name or path of the Hugging Face model to use.
--skip-tokenizer-init
Skip initialization of tokenizer and detokenizer.
--revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
--code-revision
The specific revision to use for the model code on Hugging Face Hub. It can be a
branch name, a tag name, or a commit id. If unspecified, will use the default version.
--tokenizer-revision
Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a
commit id. If unspecified, will use the default version.
--tokenizer-mode
The tokenizer mode. "auto" will use the fast tokenizer if available; "slow" will always use the slow tokenizer.
Default: "auto"
--download-dir
Directory to download and load the weights; defaults to the default cache directory of Hugging Face.
--load-format
The format of the model weights to load.
“auto” will try to load the weights in the safetensors format and fall back to the
pytorch bin format if safetensors format is not available.
“pt” will load the weights in the pytorch bin format.
“safetensors” will load the weights in the safetensors format.
“npcache” will load the weights in pytorch format and store a numpy cache to
speed up the loading.
“dummy” will initialize the weights with random values, which is mainly for
profiling.
“tensorizer” will load the weights using tensorizer from CoreWeave. See the
Tensorize vLLM Model script in the Examples section for more information.
“bitsandbytes” will load the weights using bitsandbytes quantization.
Default: “auto”
--config-format
The format of the model config to load. "auto" will try to load the config in hf format if available, else it will try to load in mistral format.
Default: "auto"
--kv-cache-dtype
Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).
Default: "auto"
--quantization-param-path
Path to the JSON file containing the KV cache scaling factors. This should generally be supplied when KV cache dtype is FP8. Otherwise, KV cache scaling factors default to 1.0, which may cause accuracy issues. FP8_E5M2 (without scaling) is only supported on CUDA versions greater than 11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
--max-model-len
Model context length. If unspecified, will be automatically derived from the model
config.
--guided-decoding-backend
Which engine will be used for guided decoding (JSON schema / regex etc.) by default. Currently supports 'outlines' and 'lm-format-enforcer'.
Default: "outlines"
--tensor-parallel-size, -tp
Number of tensor parallel replicas.
Default: 1
--max-parallel-loading-workers
Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
--ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
--seed
Random seed for operations.
Default: 0
--cpu-offload-gb
The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weights, which requires at least 26 GB of GPU memory. Note that this requires a fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
Default: 0
--gpu-memory-utilization
The fraction of GPU memory to be used for the model executor, which can range from
0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If
unspecified, will use the default value of 0.9.
Default: 0.9
--num-gpu-blocks-override
If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
--max-num-batched-tokens
Maximum number of batched tokens per iteration.
--max-logprobs
Max number of log probs to return when logprobs is specified in SamplingParams.
Default: 20
--disable-log-stats
Disable logging statistics.
--rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
--enforce-eager
Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in
hybrid for maximal performance and flexibility.
Default: False
--max-context-len-to-capture
Maximum context length covered by CUDA graphs. When a sequence has context
length larger than this, we fall back to eager mode. (DEPRECATED. Use --max-seq-len-to-capture instead)
--max-seq-len-to-capture
Maximum sequence length covered by CUDA graphs. When a sequence has context
length larger than this, we fall back to eager mode. Additionally for encoder-decoder
models, if the sequence length of the encoder input is larger than this, we fall back to
the eager mode.
--disable-custom-all-reduce
See ParallelConfig.
Default: False
--tokenizer-pool-size
Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous
tokenization.
Default: 0
--tokenizer-pool-type
Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
Default: "ray"
--tokenizer-pool-extra-config
Extra config for tokenizer pool. This should be a JSON string that will be parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
--limit-mm-per-prompt
For each multimodal plugin, limit how many input instances to allow for each prompt.
Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of
16 images and 2 videos per prompt. Defaults to 1 for each modality.
--mm-processor-kwargs
Overrides for the multi-modal processor obtained from the model's Hugging Face config, e.g., the image processor.
--max-lora-rank
Max LoRA rank.
Default: 16
--lora-extra-vocab-size
Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the
base model vocabulary).
Default: 256
--lora-dtype
Data type for LoRA. If "auto", will default to the base model dtype.
Default: "auto"
--long-lora-scaling-factors
Specify multiple scaling factors (which can be different from the base model scaling factor - see e.g. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
--max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be >= max_num_seqs. Defaults to max_num_seqs.
--fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism.
Enabling this will use the fully sharded layers. At high sequence length, max rank or
tensor parallel size, this is likely faster.
Default: False
--enable-prompt-adapter
If True, enable handling of PromptAdapters.
--max-prompt-adapter-token
Max number of PromptAdapters tokens.
--multi-step-stream-outputs
If False, then multi-step will stream outputs at the end of all steps.
Default: True
--scheduler-delay-factor
Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling
next prompt.
Default: 0.0
--enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
--num-speculative-tokens
The number of speculative tokens to sample from the draft model in speculative
decoding.
--speculative-disable-mqa-scorer
If set to True, the MQA scorer will be disabled in speculative decoding and fall back to batch expansion.
Default: False
--speculative-draft-tensor-parallel-size, -spec-draft-tp
Number of tensor parallel replicas for the draft model in speculative decoding.
--speculative-max-model-len
The maximum sequence length supported by the draft model. Sequences over this
length will skip speculation.
--speculative-disable-by-batch-size
Disable speculative decoding for new incoming requests if the number of enqueued requests is larger than this value.
--ngram-prompt-lookup-max
Max size of window for ngram prompt lookup in speculative decoding.
--typical-acceptance-sampler-posterior-alpha
A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to sqrt of --typical-acceptance-sampler-posterior-threshold, i.e. 0.3.
--disable-logprobs-during-spec-decoding
If set to True, token log probabilities are not returned during speculative decoding. If
set to False, log probabilities are returned according to the settings in
SamplingParams. If not specified, it defaults to True. Disabling log probabilities during
speculative decoding reduces latency by skipping logprob calculation in proposal
sampling, target sampling, and after accepted tokens are determined.
--model-loader-extra-config
Extra config for model loader. This will be passed to the model loader corresponding to
the chosen load_format. This should be a JSON string that will be parsed into a
dictionary.
--ignore-patterns
The pattern(s) to ignore when loading the model. Defaults to "original/**/*" to avoid repeated loading of llama's checkpoints.
--served-model-name
The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that this name(s) will also be used in the model_name tag content of prometheus metrics; if multiple names are provided, the metrics tag will take the first one.
--qlora-adapter-name-or-path
Name or path of the QLoRA adapter.
--collect-detailed-traces
Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and/or blocking operations and hence might have a performance impact.
--disable-async-output-proc
Disable async output processing. This may result in lower performance.
Config file
The serve module can also accept arguments from a config file in YAML format. The arguments in the YAML file must be specified using the long form of the argument outlined here:
For example:
# config.yaml
host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
NOTE
In case an argument is supplied simultaneously via the command line and the config file, the value from the command line will take precedence. The order of priorities is command line > config file values > defaults.
Tool Calling in the Chat Completion API

It is the caller's responsibility to prompt the model with the tool information; vLLM will not automatically manipulate the prompt.
vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the tools parameter.
--chat-template – optional for auto tool choice. The path to the chat template which handles tool-role messages and assistant-role messages that contain previously generated tool calls. Hermes, Mistral and Llama models have tool-compatible chat templates in their tokenizer_config.json files, but you can specify a custom template. This argument can be set to tool_use if your model has a tool use-specific chat template configured in the tokenizer_config.json. In this case, it will be used per the transformers specification. More on this here from HuggingFace; and you can find an example of this in a tokenizer_config.json here
If your favorite tool-calling model is not supported, please feel free to contribute a parser &
tool use chat template!
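As a rough sketch of what a tool-calling request looks like from the client side (the get_weather tool definition is only an example, the model name is one of the supported models listed below, and the server is assumed to have been started with the flags described above):

# Sketch: a tool-calling request against the OpenAI-compatible server.
# Assumes the server was started with --enable-auto-tool-choice and a matching
# --tool-call-parser; the get_weather tool here is only an example.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=tools,
    tool_choice="auto",
)
print(completion.choices[0].message.tool_calls)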
Hermes Models
All Nous Research Hermes-series models newer than Hermes 2 Pro should be supported.
NousResearch/Hermes-2-Pro-*
NousResearch/Hermes-2-Theta-*
NousResearch/Hermes-3-*
Note that the Hermes 2 Theta models are known to have degraded tool call quality &
capabilities due to the merge step in their creation.
Flags: --tool-call-parser hermes
Mistral Models
Supported models:
mistralai/Mistral-7B-Instruct-v0.3 (confirmed)
Additional mistral function-calling models are compatible as well.
Known issues:
1. Mistral 7B struggles to generate parallel tool calls correctly.
2. Mistral’s tokenizer_config.json chat template requires tool call IDs that are exactly
9 digits, which is much shorter than what vLLM generates. Since an exception is
thrown when this condition is not met, the following additional chat templates are
provided:
examples/tool_chat_template_mistral.jinja - this is the “official” Mistral chat
template, but tweaked so that it works with vLLM’s tool call IDs (provided
tool_call_id fields are truncated to the last 9 digits)
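For reference, the truncation performed by the tweaked template amounts to keeping the last 9 characters of the ID, roughly like this (the ID value below is hypothetical):

# Hypothetical illustration of the tool call ID truncation done by the template.
tool_call_id = "chatcmpl-tool-7f3a2b1c9d8e"  # hypothetical vLLM-generated ID
mistral_compatible_id = tool_call_id[-9:]    # keep only the last 9 characters
print(mistral_compatible_id)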
Llama Models
Supported models:
meta-llama/Meta-Llama-3.1-8B-Instruct
meta-llama/Meta-Llama-3.1-70B-Instruct
meta-llama/Meta-Llama-3.1-405B-Instruct
meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
The tool calling that is supported is the JSON-based tool calling. Other tool calling formats, like the built-in python tool calling or custom tool calling, are not supported.
Known issues:
1. Parallel tool calls are not supported.
2. The model can generate parameters with a wrong format, such as generating an array
serialized as string instead of an array.
The tool_chat_template_llama3_json.jinja file contains the “official” Llama chat
template, but tweaked so that it works better with vLLM.
Recommended flags: --tool-call-parser llama3_json --chat-template
examples/tool_chat_template_llama3_json.jinja
Internlm Models
Supported models:
internlm/internlm2_5-7b-chat (confirmed)
Additional internlm2.5 function-calling models are compatible as well
Known issues:
Although this implementation also supports Internlm2, the tool call results are not
stable when testing with the internlm/internlm2-chat-7b model.
Recommended flags: --tool-call-parser internlm --chat-template
examples/tool_chat_template_internlm2_tool.jinja
You can then use this plugin on the command line like this:
--enable-auto-tool-choice \
    --tool-parser-plugin <absolute path of the plugin file> \
    --tool-call-parser example \
    --chat-template <your chat template>