

OpenAI Compatible Server


Contents
API Reference
Extra Parameters
Chat Template
Command line arguments for the server
Config file
Tool Calling in the Chat Completion API
Tool calling in the chat completion API
vLLM provides an HTTP server that implements OpenAI’s Completions and Chat API.
You can start the server using Python, or using Docker:
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123

To call the server, you can use the official OpenAI Python client library, or any other HTTP
client.
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
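If you prefer not to use the OpenAI SDK, you can reach the same endpoint with any plain HTTP client. A minimal sketch using the requests library, assuming the server started above is listening on localhost:8000 with the API key token-abc123:

import requests

# Raw-HTTP equivalent of the chat completion call above.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json={
        "model": "NousResearch/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])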

API Reference

Please see the OpenAI API Reference for more information on the API. We support all
parameters except:
Chat: tools, and tool_choice.
Completions: suffix.
vLLM also provides experimental support for OpenAI Vision API compatible inference. See
more details in Using VLMs.

Extra Parameters
vLLM supports a set of parameters that are not part of the OpenAI API. To use them, you can pass them as extra parameters in the OpenAI client, or merge them directly into the JSON payload if you are calling the HTTP API yourself.
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        "guided_choice": ["positive", "negative"]
    }
)
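If you are calling the HTTP API directly instead of going through the OpenAI client, the extra parameter is simply another key in the request body. A sketch with requests, under the same host and API key assumptions as the earlier examples:

import requests

payload = {
    "model": "NousResearch/Meta-Llama-3-8B-Instruct",
    "messages": [
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    # vLLM-specific extra parameter merged directly into the JSON payload.
    "guided_choice": ["positive", "negative"],
}
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])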

Extra Parameters for Chat API


The following sampling parameters (click through to see documentation) are supported.
best_of: Optional[int] = None
use_beam_search: bool = False
top_k: int = -1
min_p: float = 0.0
repetition_penalty: float = 1.0
length_penalty: float = 1.0
stop_token_ids: Optional[List[int]] = Field(default_factory=list)
include_stop_str_in_output: bool = False
ignore_eos: bool = False
min_tokens: int = 0
skip_special_tokens: bool = True
spaces_between_special_tokens: bool = True
truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
prompt_logprobs: Optional[int] = None

The following extra parameters are supported:


echo: bool = Field(
    default=False,
    description=(
        "If true, the new message will be prepended with the last message "
        "if they belong to the same role."),
)
add_generation_prompt: bool = Field(
    default=True,
    description=(
        "If true, the generation prompt will be added to the chat template. "
        "This is a parameter used by chat template in tokenizer config of the "
        "model."),
)
continue_final_message: bool = Field(
    default=False,
    description=(
        "If this is set, the chat will be formatted so that the final "
        "message in the chat is open-ended, without any EOS tokens. The "
        "model will continue this message rather than starting a new one. "
        "This allows you to \"prefill\" part of the model's response for it. "
        "Cannot be used at the same time as `add_generation_prompt`."),
)
add_special_tokens: bool = Field(
    default=False,
    description=(
        "If true, special tokens (e.g. BOS) will be added to the prompt "
        "on top of what is added by the chat template. "
        "For most models, the chat template takes care of adding the "
        "special tokens so this should be set to false (as is the "
        "default)."),
)
documents: Optional[List[Dict[str, str]]] = Field(
    default=None,
    description=(
        "A list of dicts representing documents that will be accessible to "
        "the model if it is performing RAG (retrieval-augmented generation). "
        "If the template does not support RAG, this argument will have no "
        "effect. We recommend that each document should be a dict containing "
        "\"title\" and \"text\" keys."),
)
chat_template: Optional[str] = Field(
    default=None,
    description=(
        "A Jinja template to use for this conversion. "
        "As of transformers v4.44, default chat template is no longer "
        "allowed, so you must provide a chat template if the tokenizer "
        "does not define one."),
)
chat_template_kwargs: Optional[Dict[str, Any]] = Field(
    default=None,
    description=("Additional kwargs to pass to the template renderer. "
                 "Will be accessible by the chat template."),
)
guided_json: Optional[Union[str, dict, BaseModel]] = Field(
    default=None,
    description=("If specified, the output will follow the JSON schema."),
)
guided_regex: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the regex pattern."),
)
guided_choice: Optional[List[str]] = Field(
    default=None,
    description=(
        "If specified, the output will be exactly one of the choices."),
)
guided_grammar: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the context free grammar."),
)
guided_decoding_backend: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default guided decoding backend "
        "of the server for this specific request. If set, must be either "
        "'outlines' / 'lm-format-enforcer'"))
guided_whitespace_pattern: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default whitespace pattern "
        "for guided json decoding."))
priority: int = Field(
    default=0,
    description=(
        "The priority of the request (lower means earlier handling; "
        "default: 0). Any priority other than 0 will raise an error "
        "if the served model does not use priority scheduling."))

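As a concrete illustration of the guided decoding fields above, the following sketch constrains a chat completion to a JSON schema by passing guided_json through extra_body. It reuses the client and model from the earlier examples; the SentimentResult schema is a made-up illustration and assumes Pydantic v2 for model_json_schema():

from pydantic import BaseModel

class SentimentResult(BaseModel):
    # Illustrative schema; any JSON schema (as a dict or string) works here.
    label: str
    confidence: float

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        # The server constrains decoding so the output matches this schema.
        "guided_json": SentimentResult.model_json_schema(),
    },
)
print(completion.choices[0].message.content)
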
Extra Parameters for Completions API


The following sampling parameters (click through to see documentation) are supported.
use_beam_search: bool = False
top_k: int = -1
min_p: float = 0.0
repetition_penalty: float = 1.0
length_penalty: float = 1.0
stop_token_ids: Optional[List[int]] = Field(default_factory=list)
include_stop_str_in_output: bool = False
ignore_eos: bool = False
min_tokens: int = 0
skip_special_tokens: bool = True
spaces_between_special_tokens: bool = True
truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
allowed_token_ids: Optional[List[int]] = None
prompt_logprobs: Optional[int] = None

The following extra parameters are supported:


add_special_tokens: bool = Field(
    default=True,
    description=(
        "If true (the default), special tokens (e.g. BOS) will be added to "
        "the prompt."),
)
response_format: Optional[ResponseFormat] = Field(
    default=None,
    description=(
        "Similar to chat completion, this parameter specifies the format of "
        "output. Only {'type': 'json_object'} or {'type': 'text' } is "
        "supported."),
)
guided_json: Optional[Union[str, dict, BaseModel]] = Field(
    default=None,
    description="If specified, the output will follow the JSON schema.",
)
guided_regex: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the regex pattern."),
)
guided_choice: Optional[List[str]] = Field(
    default=None,
    description=(
        "If specified, the output will be exactly one of the choices."),
)
guided_grammar: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the context free grammar."),
)
guided_decoding_backend: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default guided decoding backend "
        "of the server for this specific request. If set, must be one of "
        "'outlines' / 'lm-format-enforcer'"))
guided_whitespace_pattern: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default whitespace pattern "
        "for guided json decoding."))
priority: int = Field(
    default=0,
    description=(
        "The priority of the request (lower means earlier handling; "
        "default: 0). Any priority other than 0 will raise an error "
        "if the served model does not use priority scheduling."))

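The pattern is the same for the Completions API. A sketch that constrains the completion with guided_regex, reusing the client from earlier; the prompt and pattern are only illustrative:

completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="The IP address of the default localhost interface is ",
    max_tokens=16,
    extra_body={
        # Constrain decoding to an IPv4-shaped string.
        "guided_regex": r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}",
    },
)
print(completion.choices[0].text)
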
Chat Template

In order for the language model to support the chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.
An example chat template for NousResearch/Meta-Llama-3-8B-Instruct can be found
here
Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template via the --chat-template parameter, using either the file path to the chat template or the template in string form. Without a chat template, the server will not be able to process chat requests, and all chat requests will error.
vllm serve <model> --chat-template ./path-to-chat-template.jinja

The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here
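
To check whether a model already ships a chat template (and therefore does not need --chat-template), you can inspect its tokenizer with the transformers library. A small sketch, assuming transformers is installed and the model files are accessible:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

# If this prints False, pass --chat-template when starting the server.
print(tokenizer.chat_template is not None)

# Preview how a conversation would be rendered by the template.
print(tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
))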

Command line arguments for the server


usage: vllm serve [-h] [--host HOST] [--port PORT]
                  [--uvicorn-log-level {debug,info,warning,error,critical,trace}]
                  [--allow-credentials] [--allowed-origins ALLOWED_ORIGINS]
                  [--allowed-methods ALLOWED_METHODS]
                  [--allowed-headers ALLOWED_HEADERS] [--api-key API_KEY]
                  [--lora-modules LORA_MODULES [LORA_MODULES ...]]
                  [--prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]]
                  [--chat-template CHAT_TEMPLATE]
                  [--response-role RESPONSE_ROLE] [--ssl-keyfile SSL_KEYFILE]
                  [--ssl-certfile SSL_CERTFILE] [--ssl-ca-certs SSL_CA_CERTS]
                  [--ssl-cert-reqs SSL_CERT_REQS] [--root-path ROOT_PATH]
                  [--middleware MIDDLEWARE] [--return-tokens-as-token-ids]
                  [--disable-frontend-multiprocessing]
                  [--enable-auto-tool-choice]
                  [--tool-call-parser {hermes,internlm,llama3_json,mistral} or name registered in --tool-parser-plugin]
                  [--tool-parser-plugin TOOL_PARSER_PLUGIN] [--model MODEL]
                  [--tokenizer TOKENIZER] [--skip-tokenizer-init]
                  [--revision REVISION] [--code-revision CODE_REVISION]
                  [--tokenizer-revision TOKENIZER_REVISION]
                  [--tokenizer-mode {auto,slow,mistral}] [--trust-remote-code]
                  [--download-dir DOWNLOAD_DIR]
                  [--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral}]
                  [--config-format {auto,hf,mistral}]
                  [--dtype {auto,half,float16,bfloat16,float,float32}]
                  [--kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}]
                  [--quantization-param-path QUANTIZATION_PARAM_PATH]
                  [--max-model-len MAX_MODEL_LEN]
                  [--guided-decoding-backend {outlines,lm-format-enforcer}]
                  [--distributed-executor-backend {ray,mp}] [--worker-use-ray]
                  [--pipeline-parallel-size PIPELINE_PARALLEL_SIZE]
                  [--tensor-parallel-size TENSOR_PARALLEL_SIZE]
                  [--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS]
                  [--ray-workers-use-nsight] [--block-size {8,16,32}]
                  [--enable-prefix-caching] [--disable-sliding-window]
                  [--use-v2-block-manager]
                  [--num-lookahead-slots NUM_LOOKAHEAD_SLOTS] [--seed SEED]
                  [--swap-space SWAP_SPACE] [--cpu-offload-gb CPU_OFFLOAD_GB]
                  [--gpu-memory-utilization GPU_MEMORY_UTILIZATION]
                  [--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
                  [--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS]
                  [--max-num-seqs MAX_NUM_SEQS] [--max-logprobs MAX_LOGPROBS]
                  [--disable-log-stats]
                  [--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,ipex,None}]
                  [--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA]
                  [--enforce-eager]
                  [--max-context-len-to-capture MAX_CONTEXT_LEN_TO_CAPTURE]
                  [--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE]
                  [--disable-custom-all-reduce]
                  [--tokenizer-pool-size TOKENIZER_POOL_SIZE]
                  [--tokenizer-pool-type TOKENIZER_POOL_TYPE]
                  [--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG]
                  [--limit-mm-per-prompt LIMIT_MM_PER_PROMPT]
                  [--mm-processor-kwargs MM_PROCESSOR_KWARGS] [--enable-lora]
                  [--max-loras MAX_LORAS] [--max-lora-rank MAX_LORA_RANK]
                  [--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE]
                  [--lora-dtype {auto,float16,bfloat16,float32}]
                  [--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS]
                  [--max-cpu-loras MAX_CPU_LORAS] [--fully-sharded-loras]
                  [--enable-prompt-adapter]
                  [--max-prompt-adapters MAX_PROMPT_ADAPTERS]
                  [--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN]
                  [--device {auto,cuda,neuron,cpu,openvino,tpu,xpu}]
                  [--num-scheduler-steps NUM_SCHEDULER_STEPS]
                  [--multi-step-stream-outputs [MULTI_STEP_STREAM_OUTPUTS]]
                  [--scheduler-delay-factor SCHEDULER_DELAY_FACTOR]
                  [--enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]]
                  [--speculative-model SPECULATIVE_MODEL]
                  [--speculative-model-quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,ipex,None}]
                  [--num-speculative-tokens NUM_SPECULATIVE_TOKENS]
                  [--speculative-disable-mqa-scorer]
                  [--speculative-draft-tensor-parallel-size SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE]
                  [--speculative-max-model-len SPECULATIVE_MAX_MODEL_LEN]
                  [--speculative-disable-by-batch-size SPECULATIVE_DISABLE_BY_BATCH_SIZE]
                  [--ngram-prompt-lookup-max NGRAM_PROMPT_LOOKUP_MAX]
                  [--ngram-prompt-lookup-min NGRAM_PROMPT_LOOKUP_MIN]
                  [--spec-decoding-acceptance-method {rejection_sampler,typical_acceptance_sampler}]
                  [--typical-acceptance-sampler-posterior-threshold TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_THRESHOLD]
                  [--typical-acceptance-sampler-posterior-alpha TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_ALPHA]
                  [--disable-logprobs-during-spec-decoding [DISABLE_LOGPROBS_DURING_SPEC_DECODING]]
                  [--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG]
                  [--ignore-patterns IGNORE_PATTERNS]
                  [--preemption-mode PREEMPTION_MODE]
                  [--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]]
                  [--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH]
                  [--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
                  [--collect-detailed-traces COLLECT_DETAILED_TRACES]
                  [--disable-async-output-proc]
                  [--override-neuron-config OVERRIDE_NEURON_CONFIG]
                  [--scheduling-policy {fcfs,priority}]
                  [--disable-log-requests] [--max-log-len MAX_LOG_LEN]
                  [--disable-fastapi-docs]

Named Arguments
--host

host name
--port

port number
Default: 8000
--uvicorn-log-level

Possible choices: debug, info, warning, error, critical, trace


log level for uvicorn
Default: “info”
--allow-credentials

allow credentials
Default: False
--allowed-origins

allowed origins
Default: [‘*’]
--allowed-methods

allowed methods
Default: [‘*’]
--allowed-headers

allowed headers
Default: [‘*’]

--api-key

If provided, the server will require this key to be presented in the header.
--lora-modules

LoRA module configurations in either ‘name=path’ format or JSON format. Example (old format): ‘name=path’ Example (new format): ‘{“name”: “name”, “local_path”: “path”, “base_model_name”: “id”}’
--prompt-adapters

Prompt adapter configurations in the format name=path. Multiple adapters can be


specified.
--chat-template

The file path to the chat template, or the template in single-line form for the specified
model
--response-role

The role name to return if request.add_generation_prompt=true.


Default: assistant
--ssl-keyfile

The file path to the SSL key file


--ssl-certfile

The file path to the SSL cert file


--ssl-ca-certs

The CA certificates file


--ssl-cert-reqs

Whether client certificate is required (see stdlib ssl module’s)


Default: 0
--root-path

FastAPI root_path when app is behind a path based routing proxy


--middleware

Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware(‘http’). If a class is provided, vLLM will add it to the server using app.add_middleware().
Default: []
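
For example, a function-style middleware could look like the following sketch. The module name my_middleware and the header it sets are hypothetical; starting the server with --middleware my_middleware.add_response_header would then apply it to every request handled by the server:

# my_middleware.py -- hypothetical module importable on the server's PYTHONPATH.
from fastapi import Request

async def add_response_header(request: Request, call_next):
    # Run the request through the rest of the app, then tag the response.
    response = await call_next(request)
    response.headers["X-Served-By"] = "vllm-openai-server"
    return response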
--return-tokens-as-token-ids

When --max-logprobs is specified, represents single tokens as strings of the form ‘token_id:{token_id}’ so that tokens that are not JSON-encodable can be identified.
Default: False
--disable-frontend-multiprocessing

If specified, will run the OpenAI frontend server in the same process as the model
serving engine.
Default: False
--enable-auto-tool-choice

Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use
Default: False
--tool-call-parser

Select the tool call parser depending on the model that you’re using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice.
--tool-parser-plugin

Specify the tool parser plugin used to parse model-generated tool calls into OpenAI API format; the name registered in this plugin can be used in --tool-call-parser.
Default: “”
--model

Name or path of the huggingface model to use.


Default: “facebook/opt-125m”
--tokenizer

Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.

--skip-tokenizer-init

Skip initialization of tokenizer and detokenizer


Default: False
--revision

The specific model version to use. It can be a branch name, a tag name, or a commit id.
If unspecified, will use the default version.
--code-revision

The specific revision to use for the model code on Hugging Face Hub. It can be a
branch name, a tag name, or a commit id. If unspecified, will use the default version.
--tokenizer-revision

Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a
commit id. If unspecified, will use the default version.
--tokenizer-mode

Possible choices: auto, slow, mistral


The tokenizer mode.
“auto” will use the fast tokenizer if available.
“slow” will always use the slow tokenizer.
“mistral” will always use the mistral_common tokenizer.
Default: “auto”
--trust-remote-code

Trust remote code from huggingface.


Default: False
--download-dir

Directory to download and load the weights, default to the default cache dir of
huggingface.
--load-format

Possible choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state,


gguf, bitsandbytes, mistral

The format of the model weights to load.
“auto” will try to load the weights in the safetensors format and fall back to the
pytorch bin format if safetensors format is not available.
“pt” will load the weights in the pytorch bin format.
“safetensors” will load the weights in the safetensors format.
“npcache” will load the weights in pytorch format and store a numpy cache to
speed up the loading.
“dummy” will initialize the weights with random values, which is mainly for
profiling.
“tensorizer” will load the weights using tensorizer from CoreWeave. See the
Tensorize vLLM Model script in the Examples section for more information.
“bitsandbytes” will load the weights using bitsandbytes quantization.
Default: “auto”
--config-format

Possible choices: auto, hf, mistral


The format of the model config to load.
“auto” will try to load the config in hf format if available else it will try to load in
mistral format
Default: “auto”
--dtype

Possible choices: auto, half, float16, bfloat16, float, float32


Data type for model weights and activations.
“auto” will use FP16 precision for FP32 and FP16 models, and BF16 precision for
BF16 models.
“half” for FP16. Recommended for AWQ quantization.
“float16” is the same as “half”.
“bfloat16” for a balance between precision and range.
“float” is shorthand for FP32 precision.
“float32” for FP32 precision.
Default: “auto”
--kv-cache-dtype

Possible choices: auto, fp8, fp8_e5m2, fp8_e4m3

Data type for kv cache storage. If “auto”, will use model data type. CUDA 11.8+
supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8
(=fp8_e4m3)
Default: “auto”
--quantization-param-path

Path to the JSON file containing the KV cache scaling factors. This should generally be
supplied, when KV cache dtype is FP8. Otherwise, KV cache scaling factors default to
1.0, which may cause accuracy issues. FP8_E5M2 (without scaling) is only supported
on cuda version greater than 11.8. On ROCm (AMD GPU), FP8_E4M3 is instead
supported for common inference criteria.
--max-model-len

Model context length. If unspecified, will be automatically derived from the model
config.
--guided-decoding-backend

Possible choices: outlines, lm-format-enforcer


Which engine will be used for guided decoding (JSON schema / regex etc) by default.
Currently support  outlines-dev/outlines and  noamgat/lm-format-enforcer. Can be
overridden per request via guided_decoding_backend parameter.
Default: “outlines”
--distributed-executor-backend

Possible choices: ray, mp


Backend to use for distributed serving. When more than 1 GPU is used, will be
automatically set to “ray” if installed or “mp” (multiprocessing) otherwise.
--worker-use-ray

Deprecated, use --distributed-executor-backend=ray.


Default: False
--pipeline-parallel-size, -pp

Number of pipeline stages.


Default: 1

--tensor-parallel-size, -tp

Number of tensor parallel replicas.


Default: 1
--max-parallel-loading-workers

Load model sequentially in multiple batches, to avoid RAM OOM when using tensor
parallel and large models.
--ray-workers-use-nsight

If specified, use nsight to profile Ray workers.


Default: False
--block-size

Possible choices: 8, 16, 32


Token block size for contiguous chunks of tokens. This is ignored on neuron devices
and set to max-model-len
Default: 16
--enable-prefix-caching

Enables automatic prefix caching.
Default: False
--disable-sliding-window

Disables sliding window, capping to sliding window size


Default: False
--use-v2-block-manager

Use BlockSpaceManagerV2. By default this is set to True. Set to False to use BlockSpaceManagerV1.
Default: True
--num-lookahead-slots

Experimental scheduling config necessary for speculative decoding. This will be


replaced by speculative config in the future; it is present to enable correctness tests
until then.

Default: 0



--seed

Random seed for operations.


Default: 0
--swap-space

CPU swap space size (GiB) per GPU.


Default: 4
--cpu-offload-gb

The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weights, which requires at least 26GB of GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
Default: 0
--gpu-memory-utilization

The fraction of GPU memory to be used for the model executor, which can range from
0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If
unspecified, will use the default value of 0.9.
Default: 0.9
--num-gpu-blocks-override

If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
--max-num-batched-tokens

Maximum number of batched tokens per iteration.


--max-num-seqs

Maximum number of sequences per iteration.


Default: 256

--max-logprobs

Max number of log probs to return when logprobs is specified in SamplingParams.
Default: 20
--disable-log-stats

Disable logging statistics.


Default: False
--quantization, -q

Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, fbgemm_fp8, modelopt,


marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors,
bitsandbytes, qqq, experts_int8, neuron_quant, ipex, None
Method used to quantize the weights. If None, we first check the quantization_config
attribute in the model config file. If that is None, we assume the model weights are not
quantized and use dtype to determine the data type of the weights.
--rope-scaling

RoPE scaling configuration in JSON format. For example, {"type":"dynamic","factor":2.0}
--rope-theta

RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves
the performance of the scaled model.
--enforce-eager

Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in
hybrid for maximal performance and flexibility.
Default: False
--max-context-len-to-capture

Maximum context length covered by CUDA graphs. When a sequence has context
length larger than this, we fall back to eager mode. (DEPRECATED. Use --max-seq-len-to-capture instead)
--max-seq-len-to-capture

Maximum sequence length covered by CUDA graphs. When a sequence has context
length larger than this, we fall back to eager mode. Additionally for encoder-decoder
models, if the sequence length of the encoder input is larger than this, we fall back to
the eager mode.

Default: 8192
--disable-custom-all-reduce

See ParallelConfig.
Default: False
--tokenizer-pool-size

Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous
tokenization.
Default: 0
--tokenizer-pool-type

Type of tokenizer pool to use for asynchronous tokenization. Ignored if


tokenizer_pool_size is 0.
Default: “ray”
--tokenizer-pool-extra-config

Extra config for tokenizer pool. This should be a JSON string that will be parsed into a
dictionary. Ignored if tokenizer_pool_size is 0.
--limit-mm-per-prompt

For each multimodal plugin, limit how many input instances to allow for each prompt.
Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of
16 images and 2 videos per prompt. Defaults to 1 for each modality.
--mm-processor-kwargs

Overrides for the multimodal input mapping/processing, e.g., image processor. For example: {"num_crops": 4}.
--enable-lora

If True, enable handling of LoRA adapters.


Default: False
--max-loras

Max number of LoRAs in a single batch.


Default: 1
--max-lora-rank

Max LoRA rank.
Default: 16
--lora-extra-vocab-size

Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the
base model vocabulary).
Default: 256
--lora-dtype

Possible choices: auto, float16, bfloat16, float32


Data type for LoRA. If auto, will default to base model dtype.
Default: “auto”
--long-lora-scaling-factors

Specify multiple scaling factors (which can be different from base model scaling factor
- see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling
factors to be used at the same time. If not specified, only adapters trained with the
base model scaling factor are allowed.
--max-cpu-loras

Maximum number of LoRAs to store in CPU memory. Must be >= max_num_seqs. Defaults to max_num_seqs.
--fully-sharded-loras

By default, only half of the LoRA computation is sharded with tensor parallelism.
Enabling this will use the fully sharded layers. At high sequence length, max rank or
tensor parallel size, this is likely faster.
Default: False
--enable-prompt-adapter

If True, enable handling of PromptAdapters.


Default: False
--max-prompt-adapters

Max number of PromptAdapters in a batch.


Default: 1

--max-prompt-adapter-token
Max number of PromptAdapters tokens


Default: 0
--device

Possible choices: auto, cuda, neuron, cpu, openvino, tpu, xpu


Device type for vLLM execution.
Default: “auto”
--num-scheduler-steps

Maximum number of forward steps per scheduler call.


Default: 1
--multi-step-stream-outputs

If False, then multi-step will stream outputs at the end of all steps
Default: True
--scheduler-delay-factor

Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling
next prompt.
Default: 0.0
--enable-chunked-prefill

If set, the prefill requests can be chunked based on the max_num_batched_tokens.


--speculative-model

The name of the draft model to be used in speculative decoding.


--speculative-model-quantization

Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, fbgemm_fp8, modelopt,


marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors,
bitsandbytes, qqq, experts_int8, neuron_quant, ipex, None
Method used to quantize the weights of speculative model. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.

--num-speculative-tokens

The number of speculative tokens to sample from the draft model in speculative
decoding.
--speculative-disable-mqa-scorer

If set to True, the MQA scorer will be disabled in speculative and fall back to batch
expansion
Default: False
--speculative-draft-tensor-parallel-size, -spec-draft-tp

Number of tensor parallel replicas for the draft model in speculative decoding.
--speculative-max-model-len

The maximum sequence length supported by the draft model. Sequences over this
length will skip speculation.
--speculative-disable-by-batch-size

Disable speculative decoding for new incoming requests if the number of enqueue
requests is larger than this value.
--ngram-prompt-lookup-max

Max size of window for ngram prompt lookup in speculative decoding.


--ngram-prompt-lookup-min

Min size of window for ngram prompt lookup in speculative decoding.


--spec-decoding-acceptance-method

Possible choices: rejection_sampler, typical_acceptance_sampler


Specify the acceptance method to use during draft token verification in speculative
decoding. Two types of acceptance routines are supported: 1) RejectionSampler which
does not allow changing the acceptance rate of draft tokens, 2)
TypicalAcceptanceSampler which is configurable, allowing for a higher acceptance rate
at the cost of lower quality, and vice versa.
Default: “rejection_sampler”
--typical-acceptance-sampler-posterior-threshold
Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions during speculative decoding. Defaults to 0.09

--typical-acceptance-sampler-posterior-alpha

A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to sqrt of --typical-acceptance-sampler-posterior-threshold i.e. 0.3
--disable-logprobs-during-spec-decoding

If set to True, token log probabilities are not returned during speculative decoding. If
set to False, log probabilities are returned according to the settings in
SamplingParams. If not specified, it defaults to True. Disabling log probabilities during
speculative decoding reduces latency by skipping logprob calculation in proposal
sampling, target sampling, and after accepted tokens are determined.
--model-loader-extra-config

Extra config for model loader. This will be passed to the model loader corresponding to
the chosen load_format. This should be a JSON string that will be parsed into a
dictionary.
--ignore-patterns

The pattern(s) to ignore when loading the model. Default to ‘original/**/*’ to avoid repeated loading of llama’s checkpoints.
Default: []
--preemption-mode

If ‘recompute’, the engine performs preemption by recomputing; If ‘swap’, the engine


performs preemption by block swapping.
--served-model-name

The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that this name(s) will also be used in the model_name tag content of prometheus metrics; if multiple names are provided, the metrics tag will take the first one.
--qlora-adapter-name-or-path

Name or path of the QLoRA adapter.


--otlp-traces-endpoint
Target URL to which OpenTelemetry traces will be sent.

--collect-detailed-traces

Valid choices are model,worker,all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and/or blocking operations and hence might have a performance impact.
--disable-async-output-proc

Disable async output processing. This may result in lower performance.


Default: False
--override-neuron-config

Override or set neuron device configuration. e.g. {"cast_logits_dtype": "bfloat16"}


--scheduling-policy

Possible choices: fcfs, priority


The scheduling policy to use. “fcfs” (first come first served, i.e. requests are handled in
order of arrival; default) or “priority” (requests are handled based on given priority
(lower value means earlier handling) and time of arrival deciding any ties).
Default: “fcfs”
--disable-log-requests

Disable logging requests.


Default: False
--max-log-len

Max number of prompt characters or prompt ID numbers being printed in log.


Default: Unlimited
--disable-fastapi-docs

Disable FastAPI’s OpenAPI schema, Swagger UI, and ReDoc endpoint


Default: False

Tool Calling in the Chat Completion API

Named Function Calling


vLLM supports named function calling in the chat completion API by default. It does so using Outlines, so this is enabled out of the box and will work with any supported model. You are guaranteed a validly-parsable function call, not a high-quality one.
To use a named function, you need to define the functions in the tools parameter of the
chat completion request, and specify the name of one of the tools in the tool_choice
parameter of the chat completion request.
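
A sketch of a named function call through the OpenAI client created earlier; the tool definition and names are illustrative only:

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What is the weather in Vienna?"}],
    tools=tools,
    # Name the tool explicitly; vLLM guarantees a parsable call to it.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(completion.choices[0].message.tool_calls[0].function.arguments)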

Config file
The serve module can also accept arguments from a config file in yaml format. The
arguments in the yaml must be specified using the long form of the argument outlined
here:
For example:
# config.yaml

host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"

$ vllm serve SOME_MODEL --config config.yaml

NOTE
In case an argument is supplied simultaneously via the command line and the config file, the value from the command line will take precedence. The order of priorities is command line > config file values > defaults.

Tool calling in the chat completion API

vLLM supports only named function calling in the chat completion API. The tool_choice options auto and required are not yet supported but on the roadmap.

It is the caller’s responsibility to prompt the model with the tool information; vLLM will not automatically manipulate the prompt.
vLLM will use guided decoding to ensure the response matches the tool parameter object
defined by the JSON schema in the tools parameter.

Automatic Function Calling


To enable this feature, you should set the following flags:
--enable-auto-tool-choice – mandatory for Auto tool choice. Tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.
--tool-call-parser – select the tool parser to use - currently either hermes or mistral or llama3_json or internlm. Additional tool parsers will continue to be added in the future, and you can also register your own tool parsers via --tool-parser-plugin.

--tool-parser-plugin – optional tool parser plugin used to register user-defined tool parsers into vLLM; the registered tool parser name can be specified in --tool-call-parser.

--chat-template – optional for auto tool choice. The path to the chat template which handles tool-role messages and assistant-role messages that contain previously generated tool calls. Hermes, Mistral and Llama models have tool-compatible chat templates in their tokenizer_config.json files, but you can specify a custom template. This argument can be set to tool_use if your model has a tool use-specific chat template configured in the tokenizer_config.json. In this case, it will be used per the transformers specification. More on this here from HuggingFace; and you can find an example of this in a tokenizer_config.json here
If your favorite tool-calling model is not supported, please feel free to contribute a parser &
tool use chat template!
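
With those server flags in place, the client side looks like a standard OpenAI tool-calling request. A sketch that reuses the client and the illustrative tools list from the named function calling example above and lets the model decide whether to call the tool:

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What is the weather in Vienna?"}],
    tools=tools,
    tool_choice="auto",  # the configured tool parser extracts any generated calls
)
message = completion.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; arguments arrive as a JSON string.
    print(message.tool_calls[0].function.name,
          message.tool_calls[0].function.arguments)
else:
    print(message.content)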

Hermes Models
All Nous Research Hermes-series models newer than Hermes 2 Pro should be supported.
NousResearch/Hermes-2-Pro-*

NousResearch/Hermes-2-Theta-*

NousResearch/Hermes-3-*

Note that the Hermes 2 Theta models are known to have degraded tool call quality &
capabilities due to the merge step in their creation.
Flags: --tool-call-parser hermes

Mistral Models
Supported models:
mistralai/Mistral-7B-Instruct-v0.3 (confirmed)
Additional mistral function-calling models are compatible as well.
Known issues:
1. Mistral 7B struggles to generate parallel tool calls correctly.
2. Mistral’s tokenizer_config.json chat template requires tool call IDs that are exactly
9 digits, which is much shorter than what vLLM generates. Since an exception is
thrown when this condition is not met, the following additional chat templates are
provided:
examples/tool_chat_template_mistral.jinja - this is the “official” Mistral chat
template, but tweaked so that it works with vLLM’s tool call IDs (provided
tool_call_id fields are truncated to the last 9 digits)

examples/tool_chat_template_mistral_parallel.jinja - this is a “better” version that adds a tool-use system prompt when tools are provided, which results in much better reliability when working with parallel tool calling.
Recommended flags: --tool-call-parser mistral --chat-template
examples/tool_chat_template_mistral_parallel.jinja

Llama Models
Supported models:
meta-llama/Meta-Llama-3.1-8B-Instruct

meta-llama/Meta-Llama-3.1-70B-Instruct

meta-llama/Meta-Llama-3.1-405B-Instruct

meta-llama/Meta-Llama-3.1-405B-Instruct-FP8

The tool calling that is supported is the JSON based tool calling. Other tool calling formats like the built in python tool calling or custom tool calling are not supported.

Known issues:
1. Parallel tool calls are not supported.
2. The model can generate parameters with a wrong format, such as generating an array
serialized as string instead of an array.
The tool_chat_template_llama3_json.jinja file contains the “official” Llama chat
template, but tweaked so that it works better with vLLM.
Recommended flags: --tool-call-parser llama3_json --chat-template
examples/tool_chat_template_llama3_json.jinja

Internlm Models
Supported models:
internlm/internlm2_5-7b-chat (confirmed)
Additional internlm2.5 function-calling models are compatible as well
Known issues:
Although this implementation also supports Internlm2, the tool call results are not
stable when testing with the internlm/internlm2-chat-7b model.
Recommended flags: --tool-call-parser internlm --chat-template
examples/tool_chat_template_internlm2_tool.jinja

How to write a tool parser plugin


A tool parser plugin is a Python file containing one or more ToolParser implementations.
You can write a ToolParser similar to the Hermes2ProToolParser in
vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py.
Here is a summary of a plugin file:

# import the required packages

# define a tool parser and register it to vllm
# the name list in register_module can be used
# in --tool-call-parser. you can define as many
# tool parsers as you want here.
@ToolParserManager.register_module(["example"])
class ExampleToolParser(ToolParser):
    def __init__(self, tokenizer: AnyTokenizer):
        super().__init__(tokenizer)

    # adjust request. e.g.: set skip special tokens
    # to False for tool call output.
    def adjust_request(
            self, request: ChatCompletionRequest) -> ChatCompletionRequest:
        return request

    # implement the tool call parse for stream call
    def extract_tool_calls_streaming(
            self,
            previous_text: str,
            current_text: str,
            delta_text: str,
            previous_token_ids: Sequence[int],
            current_token_ids: Sequence[int],
            delta_token_ids: Sequence[int],
            request: ChatCompletionRequest,
    ) -> Union[DeltaMessage, None]:
        return delta

    # implement the tool parse for non-stream call
    def extract_tool_calls(
            self,
            model_output: str,
            request: ChatCompletionRequest,
    ) -> ExtractedToolCallInformation:
        return ExtractedToolCallInformation(tools_called=False,
                                            tool_calls=[],
                                            content=text)

Then you can use this plugin in the command line like this.

--enable-auto-tool-choice \
--tool-parser-plugin <absolute path of the plugin file> \
--tool-call-parser example \
--chat-template <your chat template> \
