vLLM server arguments
Server arguments for running Red Hat AI Inference Server
Abstract
Preface
Red Hat AI Inference Server provides an OpenAI-compatible API server for inference serving. You can control the behavior of the server with arguments.
This document begins with a list of the most important server arguments that you use with the vllm serve command. A complete list of vllm serve arguments, environment variables, and server metrics is also provided.
Chapter 1. Key vLLM server arguments
There are four key arguments that you use to configure AI Inference Server to run on your hardware:
- --tensor-parallel-size: distributes your model across your host GPUs.
- --gpu-memory-utilization: adjusts accelerator memory utilization for model weights, activations, and KV cache. Measured as a fraction from 0.0 to 1.0 that defaults to 0.9. For example, you can set this value to 0.8 to limit GPU memory consumption by AI Inference Server to 80%. Use the largest value that is stable for your deployment to maximize throughput.
- --max-model-len: limits the maximum context length of the model, measured in tokens. Set this to prevent problems with memory if the model’s default context length is too long.
- --max-num-batched-tokens: limits the maximum batch size of tokens to process per step, measured in tokens. Increasing this improves throughput but can affect output token latency.
For example, you can run the Red Hat AI Inference Server container and serve a model with vLLM, changing server arguments as required.
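The command below is a minimal sketch: the container image reference is a placeholder, the model is the example model used later in this document, the command assumes the image entrypoint runs vllm serve, and the argument values are illustrative only, so adjust them to match your GPUs and workload.

$ podman run --rm -it \
    --device nvidia.com/gpu=all \
    --shm-size=4g \
    -p 8000:8000 \
    <rhaiis-image>:<tag> \
    --model unsloth/Llama-3.2-1B-Instruct \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.8 \
    --max-model-len 8192 \
    --max-num-batched-tokens 2048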
Chapter 2. Complete list of vLLM server arguments
The following is a comprehensive list of the vLLM server arguments that you can use with the vllm serve
command. An explanation of each server argument and default values is provided.
2.1. vLLM server arguments
- --model
Name or path of the Hugging Face model to use.
Default value:
facebook/opt-125m
- --task
The task to use the model for. Each AI Inference Server instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, auto can be used to select it; otherwise, you must specify explicitly which task to use.
Default value: auto
Options: auto, generate, embedding, embed, classify, score, reward, transcription
- --tokenizer
- Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path is used.
- --hf-config-path
- Name or path of the Hugging Face config to use. If unspecified, model name or path is used.
- --skip-tokenizer-init
Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token IDs.
- --revision
- The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
- --code-revision
- The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
- --tokenizer-revision
- Revision of the Hugging Face tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
- --tokenizer-mode
The tokenizer mode.
- auto uses the fast tokenizer if available.
- slow always uses the slow tokenizer.
- mistral always uses the mistral_common tokenizer.
- custom uses --tokenizer to select the preregistered tokenizer.
Default value: auto
Options: auto, slow, mistral, custom
- --trust-remote-code
- Trust remote code from Hugging Face.
- --allowed-local-media-path
- Allows API requests to read local images or videos from directories specified by the server file system. This is a security risk and should only be enabled in trusted environments.
- --download-dir
- Directory to download and load the weights. Defaults to the default Hugging Face cache directory.
- --load-format
The format of the model weights to load.
Default value: auto
Options: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer
- auto tries to load the weights in the safetensors format and falls back to the PyTorch bin format if the safetensors format is not available.
- pt loads the weights in the PyTorch bin format.
- safetensors loads the weights in the safetensors format.
- npcache loads the weights in PyTorch format and stores a numpy cache to speed up loading.
- dummy initializes the weights with random values, which is mainly for profiling.
- tensorizer loads the weights using tensorizer from CoreWeave. See the Tensorize AI Inference Server Model script in the Examples section for more information.
- runai_streamer loads the Safetensors weights using Run:ai Model Streamer.
- bitsandbytes loads the weights using bitsandbytes quantization.
- --config-format
The format of the model config to load.
Options: auto, hf, mistral
auto tries to load the config in hf format if available; if not, it tries to load it in mistral format.
Default value: ConfigFormat.AUTO
- --dtype
Data type for model weights and activations.
Default value: auto
Options: auto, half, float16, bfloat16, float, float32
- auto uses FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
- half for FP16. Recommended for AWQ quantization.
- float16 is the same as half.
- bfloat16 for a balance between precision and range.
- float is shorthand for FP32 precision.
- float32 for FP32 precision.
- --kv-cache-dtype
Data type for KV cache storage. If auto, uses the model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).
Options: auto, fp8, fp8_e5m2, fp8_e4m3
Default value: auto
- --max-model-len
- Model context length. If unspecified, value is automatically derived from the model config.
- --guided-decoding-backend
Which engine to use for guided decoding (JSON schema, regex, and so on) by default. Currently supports outlines-dev/outlines, mlc-ai/xgrammar, and noamgat/lm-format-enforcer. Can be overridden per request via the guided_decoding_backend parameter. Backend-specific options can be supplied in a comma-separated list following a colon after the backend name. Valid backends and all available options are:
- xgrammar:no-fallback
- xgrammar:disable-any-whitespace
- outlines:no-fallback
- lm-format-enforcer:no-fallback
Default value: xgrammar
- --logits-processor-pattern
- Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.
- --model-impl
Which implementation of the model to use.
Default value: auto
Options: auto, vllm, transformers
- auto tries to use the AI Inference Server implementation if it exists and falls back to the Transformers implementation if no AI Inference Server implementation is available.
- vllm uses the AI Inference Server model implementation.
- transformers uses the Transformers model implementation.
- --distributed-executor-backend
Backend to use for distributed model workers, either ray or mp (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, mp is used to keep processing on a single host. Otherwise, defaults to ray if Ray is installed and fails otherwise. Note that TPU only supports Ray for distributed inference.
Options: ray, mp, uni, external_launcher
- --pipeline-parallel-size, -pp
Number of nodes to split the model across by dividing model layers into sequential pipeline stages.
Default value: 1
- --tensor-parallel-size, -tp
Split the model across multiple GPUs to share storage and computation load.
Default value: 1
- --enable-expert-parallel
- Use expert parallelism instead of tensor parallelism for MoE layers.
- --max-parallel-loading-workers
- Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
- --ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
- --block-size
Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.
Options: 8, 16, 32, 64, 128
- --enable-prefix-caching, --no-enable-prefix-caching
Enables automatic prefix caching. Use --no-enable-prefix-caching to disable explicitly.
- --disable-sliding-window
- Disables sliding window, capping to sliding window size.
- --use-v2-block-manager
DEPRECATED: block manager v1 has been removed and SelfAttnBlockSpaceManager (block manager v2) is now the default. Setting this flag to True or False has no effect on AI Inference Server behavior.
- --num-lookahead-slots
Experimental scheduling config necessary for speculative decoding. This will be replaced by the speculative config in the future; it is present to enable correctness tests until then.
Default value: 0
- --seed
- Random seed for operations.
- --swap-space
CPU swap space size (GiB) per GPU.
Default value: 4
- --cpu-offload-gb
The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory dynamically in each model forward pass.
Default value: 0
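For example, a sketch of offloading 10 GiB of weights per GPU; the model name is illustrative and a fast CPU-GPU interconnect is assumed:
$ vllm serve unsloth/Llama-3.2-1B-Instruct --cpu-offload-gb 10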
- --gpu-memory-utilization
The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, uses the default value of 0.9. This is a per-instance limit, and only applies to the current AI Inference Server instance. It does not matter if you have another AI Inference Server instance running on the same GPU. For example, if you have two AI Inference Server instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.
Default value: 0.9
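For example, a sketch of the two-instance scenario described above, where each instance on the same GPU is limited to half of the GPU memory; the model and port values are illustrative:
$ vllm serve unsloth/Llama-3.2-1B-Instruct --gpu-memory-utilization 0.5 --port 8000
$ vllm serve unsloth/Llama-3.2-1B-Instruct --gpu-memory-utilization 0.5 --port 8001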
- --num-gpu-blocks-override
- If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
- --max-num-batched-tokens
- Maximum number of batched tokens per iteration. In vLLM, a batch is the set of all tokens from active sequences that are jointly fed into the model at each scheduler step. It is measured as "tokens per iteration" rather than "sequences per iteration".
- --max-num-partial-prefills
For chunked prefill, the maximum number of concurrent partial prefills.
Default value: 1
- --max-long-partial-prefills
For chunked prefill, the maximum number of prompts longer than --long-prefill-token-threshold that are prefilled concurrently. Setting this less than --max-num-partial-prefills allows shorter prompts to jump the queue in front of longer prompts in some cases, improving latency.
Default value: 1
- --long-prefill-token-threshold
For chunked prefill, a request is considered long if the prompt is longer than this number of tokens. Defaults to 4% of the model’s context length.
Default value: 0
- --max-num-seqs
- Maximum number of sequences per iteration.
- --max-logprobs
Max number of log probs to return when logprobs is specified in SamplingParams.
Default value: 20
- --disable-log-stats
- Disable logging statistics.
- --quantization, -q
Method used to quantize the weights. If None, first check the quantization_config attribute in the model config file. If that is None, assume the model weights are not quantized and use dtype to determine the data type of the weights.
Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None
- --rope-scaling
RoPE scaling configuration in JSON format. For example, {"rope_type": "dynamic", "factor": 2.0}.
- --rope-theta
- RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
- --hf-overrides
- Extra arguments for the Hugging Face config. This should be a JSON string that is parsed into a dictionary.
- --enforce-eager
- Always use eager-mode PyTorch. If False, uses eager mode and CUDA graph in hybrid for maximal performance and flexibility.
- --max-seq-len-to-capture
Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, AI Inference Server falls back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, AI Inference Server falls back to the eager mode.
Default value: 8192
- --disable-custom-all-reduce
See ParallelConfig.
- --tokenizer-pool-size
Size of tokenizer pool to use for asynchronous tokenization. If 0, uses synchronous tokenization.
Default value: 0
- --tokenizer-pool-type
Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
Default value:
ray
- --tokenizer-pool-extra-config
- Extra config for tokenizer pool. This should be a JSON string that is parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
- --limit-mm-per-prompt
- For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
- --mm-processor-kwargs
Overrides for the multimodal input mapping and processing, for example, the image processor: {"num_crops": 4}.
- --disable-mm-preprocessor-cache
- If true, then disables caching of the multi-modal preprocessor and mapper. (not recommended)
- --enable-lora
- If True, enable handling of LoRA adapters.
- --enable-lora-bias
- If True, enable bias for LoRA adapters.
- --max-loras
Max number of LoRAs in a single batch.
Default value: 1
- --max-lora-rank
Max LoRA rank.
Default value: 16
- --lora-extra-vocab-size
Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).
Default value: 256
- --lora-dtype
Data type for LoRA. If auto, will default to base model dtype.
Default value:
auto
Options:
auto
,float16
,bfloat16
- --long-lora-scaling-factors
- Specify multiple scaling factors (which can be different from the base model scaling factor; see, for example, Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
- --max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be greater than max_loras. Defaults to max_loras.
- --fully-sharded-loras
- By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this uses the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
- --enable-prompt-adapter
If True, enable handling of PromptAdapters.
- --max-prompt-adapters
Max number of PromptAdapters in a batch.
Default value: 1
- --max-prompt-adapter-token
Max number of PromptAdapters tokens.
Default value: 0
- --device
Device type for AI Inference Server execution.
Options: auto, cuda, neuron, cpu, openvino, tpu, xpu, hpu
Default value: auto
- --num-scheduler-steps
Maximum number of forward steps per scheduler call.
Default value: 1
- --use-tqdm-on-load, --no-use-tqdm-on-load
Whether to enable or disable progress bar when loading model weights.
Default value: True
- --multi-step-stream-outputs
If False, then multi-step will stream outputs at the end of all steps
Default value: True
- --scheduler-delay-factor
Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.
Default value: 0.0
- --enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
- --speculative-model
- The name of the draft model to be used in speculative decoding.
- --speculative-model-quantization
Method used to quantize the weights of the speculative model. If None, AI Inference Server first checks the quantization_config attribute in the model config file. If that is None, AI Inference Server assumes the model weights are not quantized and uses dtype to determine the data type of the weights.
Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None
- --num-speculative-tokens
- The number of speculative tokens to sample from the draft model in speculative decoding.
- --speculative-disable-mqa-scorer
- If set to True, the MQA scorer is disabled in speculative decoding and falls back to batch expansion.
- --speculative-draft-tensor-parallel-size, -spec-draft-tp
- Number of tensor parallel replicas for the draft model in speculative decoding.
- --speculative-max-model-len
- The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
- --speculative-disable-by-batch-size
- Disable speculative decoding for new incoming requests if the number of enqueued requests is larger than this value.
- --ngram-prompt-lookup-max
- Max size of window for ngram prompt lookup in speculative decoding.
- --ngram-prompt-lookup-min
- Minimum size of window for ngram prompt lookup in speculative decoding.
- --spec-decoding-acceptance-method
Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported:
- RejectionSampler: does not allow changing the acceptance rate of draft tokens.
- TypicalAcceptanceSampler: configurable, allows for a higher acceptance rate at the cost of lower quality, and vice versa.
Default value: rejection_sampler
Options: rejection_sampler, typical_acceptance_sampler
- --typical-acceptance-sampler-posterior-threshold
-
Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the
TypicalAcceptanceSampler
to make sampling decisions during speculative decoding. Defaults to 0.09. - --typical-acceptance-sampler-posterior-alpha
-
A scaling factor for the entropy-based threshold for token acceptance in the
TypicalAcceptanceSampler
. Typically defaults to the square root of --typical-acceptance-sampler-posterior-threshold, for example, 0.3.
- --disable-logprobs-during-spec-decoding
If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in
SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal sampling, target sampling, and after accepted tokens are determined.
- --model-loader-extra-config
Extra config for the model loader. This is passed to the model loader corresponding to the chosen load_format. This should be a JSON string that is parsed into a dictionary.
- --ignore-patterns
The pattern(s) to ignore when loading the model. Defaults to original/**/* to avoid repeated loading of llama’s checkpoints.
Default value: []
- --preemption-mode
-
If
recompute
, the engine performs preemption by recomputing; Ifswap
, the engine performs preemption by block swapping. - --served-model-name
-
The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response is the first name in this list. If not specified, the model name is the same as the --model argument. Note that the name(s) are also used in the model_name tag content for Prometheus metrics. If multiple names are provided, the metrics tag takes the first one.
- --qlora-adapter-name-or-path
- Name or path of the QLoRA adapter.
- --show-hidden-metrics-for-version
-
Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you use
--show-hidden-metrics-for-version=0.7
as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release. - --otlp-traces-endpoint
- Target URL to which OpenTelemetry traces are sent.
- --collect-detailed-traces
-
Valid choices are
model
,worker
,all
. It makes sense to set this only if --otlp-traces-endpoint is set. If set, the server collects detailed traces for the specified modules. This involves use of possibly costly and/or blocking operations and hence might have a performance impact.
- --disable-async-output-proc
- Disable async output processing. This may result in lower performance.
- --scheduling-policy
The scheduling policy to use. fcfs (first come first served; requests are handled in order of arrival; this is the default) or priority (requests are handled based on given priority, where a lower value means earlier handling, with time of arrival deciding any ties).
Default value: fcfs
Options: fcfs, priority
- --scheduler-cls
The scheduler class to use.
vllm.core.scheduler.Scheduler
is the default scheduler. Can be a class directly or the path to a class of formmod.custom_class
.Default value:
vllm.core.scheduler.Scheduler
- --override-neuron-config
-
Override or set neuron device configuration, for example, {cast_logits_dtype: bfloat16}.
- --override-pooler-config
Override or set the pooling method for pooling models, for example, {pooling_type: mean, normalize: false}.
- --compilation-config, -O
torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it is interpreted as the optimization level. NOTE: level 0 is the default level without any optimization, levels 1 and 2 are for internal testing only, and level 3 is the recommended level for production. To specify the full compilation config, use a JSON string. Following the convention of traditional compilers, using -O without a space is also supported; -O3 is equivalent to -O 3.
- --kv-transfer-config
- The configurations for distributed KV cache transfer. Should be a JSON string.
- --worker-cls
The worker class to use for distributed execution.
Default value:
auto
- --worker-extension-cls
- The worker extension class on top of the worker class. It is useful if you just want to add new functions to the worker class without changing the existing functions.
- --generation-config
The folder path to the generation config. Defaults to auto, where the generation config is loaded from the model path. If set to vllm, no generation config is loaded and the AI Inference Server defaults are used. If set to a folder path, the generation config is loaded from the specified folder path. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.
Default value: auto
- --override-generation-config
-
Overrides or sets the generation config in JSON format, for example, {temperature: 0.5}. If used with --generation-config=auto, the override parameters are merged with the default config from the model. If generation-config is None, only the override parameters are used.
- --enable-sleep-mode
- Enable sleep mode for the engine. Only supported for CUDA platform.
- --calculate-kv-scales
-
This enables dynamic calculation of
k_scale
andv_scale
whenkv-cache-dtype
isfp8
. Ifcalculate-kv-scales
is false, the scales are loaded from the model checkpoint if available. Otherwise, the scales default to 1.0. - --additional-config
-
Additional config for specified platform in JSON format. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. The input format is like
{<config_key>: <config_value>}
- --enable-reasoning
-
Whether to enable
reasoning_content
for the model. If enabled, the model is able to generate reasoning content.
- --reasoning-parser
Select the reasoning parser depending on the model that you are using. This is used to parse the reasoning content into OpenAI API format. Required for
--enable-reasoning
.Options:
deepseek_r1
- --chat-template
- Pass a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input. For more information, see Chat Template.
- --tool-call-parser
-
Options: deepseek_v3, granite-20b-fc, granite, hermes, internlm, jamba, llama4_json, llama3_json, mistral, phi4_mini_json, pythonic, or a name registered in --tool-parser-plugin.
- --cuda-graph-sizes
CUDA graph capture sizes, default is 512. If one value is provided, the capture list follows the pattern [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)]. If more than one value is provided (for example, 1 2 128), the capture list follows the provided list.
Default: 512
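For example, a sketch of both forms described above; the model name and sizes are illustrative:
$ vllm serve unsloth/Llama-3.2-1B-Instruct --cuda-graph-sizes 64
$ vllm serve unsloth/Llama-3.2-1B-Instruct --cuda-graph-sizes 1 2 128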
- --data-parallel-address, -dpa
- Address of the data parallel cluster head-node.
- --data-parallel-rpc-port, -dpp
- Port for data parallel RPC communication.
- --data-parallel-size, -dp
Number of data parallel groups. MoE layers are sharded according to the product of the tensor parallel size and data parallel size.
Default: 1
- --data-parallel-size-local, -dpl
- Number of data parallel replicas to run on this node.
- --disable-cascade-attn, --no-disable-cascade-attn
Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it can be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention is only used when the heuristic indicates that it is beneficial.
Default: False
- --disable-chunked-mm-input, --no-disable-chunked-mm-input
If set to true and chunked prefill is enabled, do not partially schedule a multimodal item. Only used in V1. This ensures that if a request has a mixed prompt (for example, text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (for example, TTTTIIIII, leaving IIIII), the item is scheduled as TTTT in one step and IIIIIIIIII in the next.
Default: False
- --enable-prompt-embeds, --no-enable-prompt-embeds
If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation.
Default: False
- --guided-decoding-disable-additional-properties, --no-guided-decoding-disable-additional-properties
If True, the guidance backend will not use
additionalProperties
in the JSON schema. This is only supported for the guidance backend and is used to better align its behaviour with outlines andxgrammar
.Default: False
- --guided-decoding-disable-any-whitespace, --no-guided-decoding-disable-any-whitespace
If True, the model will not generate any whitespace during guided decoding. This is only supported for xgrammar and guidance backends.
Default: False
- --guided-decoding-disable-fallback, --no-guided-decoding-disable-fallback
If True, vLLM will not fall back to a different backend on error.
Default: False
- --hf-token
-
The token to use as HTTP bearer authorization for remote files. If True, uses the token generated when running huggingface-cli login, stored in ~/.huggingface.
- --kv-events-config
- The configuration for event publishing. Should either be a valid JSON string or JSON keys passed individually.
- --prefix-caching-hash-algo
Set the hash algorithm for prefix caching:
- builtin is Python’s built-in hash.
- sha256 is collision resistant but with certain overheads.
Options: builtin, sha256
Default: builtin
- --pt-load-map-location
Map location for loading the PyTorch checkpoint, to support checkpoints that can only be loaded on certain devices such as cuda; this is equivalent to {"": "cuda"}. Another supported format is mapping from one device to another, for example from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from the command line, the strings in the dictionary must be double quoted for JSON parsing. For more details, see the original documentation for map_location in https://pytorch.org/docs/stable/generated/torch.load.html
Default: cpu
- --speculative-config
- The configurations for speculative decoding. Should be a JSON string.
- --ssl-keyfile
- Location of your TLS private key in PEM format.
2.2. Async engine arguments
usage: vllm serve [-h] [--disable-log-requests]
- --disable-log-requests
- Disable logging requests.
Chapter 3. vLLM server usage
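Print the full usage synopsis with vllm serve --help. A minimal invocation, using the example model from Chapter 5 with illustrative host and port values, looks like this:
$ vllm serve unsloth/Llama-3.2-1B-Instruct --host 0.0.0.0 --port 8000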
Chapter 4. Environment variables
You can use environment variables to configure the system-level installation, build, and logging behavior of AI Inference Server.
VLLM_PORT and VLLM_HOST_IP set the host port and IP address for internal usage of AI Inference Server. They are not the port and IP address for the API server. Do not use --host $VLLM_HOST_IP and --port $VLLM_PORT to start the API server.
All environment variables used by AI Inference Server are prefixed with VLLM_
. If you are using Kubernetes, do not name the service vllm
, otherwise environment variables set by Kubernetes might come into conflict with AI Inference Server environment variables. This is because Kubernetes sets environment variables for each service with the capitalized service name as the prefix. For more information, see Kubernetes environment variables.
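For example, a sketch of setting AI Inference Server environment variables before starting the server; the variable names follow the upstream vLLM convention and the values are illustrative:
$ export VLLM_LOGGING_LEVEL=DEBUG
$ export VLLM_API_KEY=<your-api-key>
$ vllm serve unsloth/Llama-3.2-1B-Instruct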
Environment variable | Description |
---|---|
|
Target device of vLLM, supporting |
| Maximum number of compilation jobs to run in parallel. By default, this is the number of CPUs. |
|
Number of threads to use for nvcc. By default, this is 1. If set, |
| If set, AI Inference Server uses precompiled binaries (\*.so). |
| Whether to force using nightly wheel in Python build for testing. |
| CMake build type. Available options: "Debug", "Release", "RelWithDebInfo". |
| If set, AI Inference Server prints verbose logs during installation. |
| Root directory for AI Inference Server configuration files. |
| Root directory for AI Inference Server cache files. |
| Used in a distributed environment to determine the IP address of the current node. |
| Used in a distributed environment to manually set the communication port. |
| Path used for IPC when the frontend API server is running in multi-processing mode. |
| If true, will load models from ModelScope instead of Hugging Face Hub. |
| Interval in seconds to log a warning message when the ring buffer is full. |
| Path to cudatoolkit home directory, under which should be bin, include, and lib directories. |
| Path to the NCCL library file. Needed for versions of NCCL >= 2.19 due to a bug in PyTorch. |
|
Used when |
| Flag to control if you want AI Inference Server to use Triton Flash Attention. |
| Force AI Inference Server to use a specific flash-attention version (2 or 3), only valid with the flash-attention backend. |
| Internal flag to enable Dynamo fullgraph capture. |
| Local rank of the process in the distributed setting, used to determine the GPU device ID. |
| Used to control the visible devices in a distributed setting. |
| Timeout for each iteration in the engine. |
| API key for AI Inference Server API server. |
| S3 access key ID for tensorizer to load model from S3. |
| S3 secret access key for tensorizer to load model from S3. |
| S3 endpoint URL for tensorizer to load model from S3. |
| URL for AI Inference Server usage stats server. |
| If true, disables collection of usage stats. |
| If true, disables tracking of AI Inference Server usage stats. |
| Source for usage stats collection. |
| If set to 1, AI Inference Server configures logging using the default configuration or the specified config path. |
| Path to the logging configuration file. |
| Default logging level for vLLM. |
| If set, AI Inference Server prepends this prefix to all log messages. |
| Number of threads used for custom logits processors. |
| If set to 1, AI Inference Server traces function calls for debugging. |
| Backend for attention computation, for example, "TORCH_SDPA", "FLASH_ATTN", or "XFORMERS". |
| If set, AI Inference Server uses the FlashInfer sampler. |
| Forces FlashInfer to use tensor cores; otherwise uses heuristics. |
| Pipeline stage partition strategy. |
| CPU key-value cache space (default is 4GB). |
| CPU core IDs bound by OpenMP threads. |
| Whether to use prepack for MoE layer on unsupported CPUs. |
| OpenVINO device selection (default is CPU). |
| OpenVINO key-value cache space (default is 4GB). |
| Precision for OpenVINO KV cache. |
| Enables weights compression during model export by using HF Optimum. |
| Enables Ray SPMD worker for execution on all workers. |
| Uses the Compiled Graph API provided by Ray to optimize control plane overhead. |
| Enables NCCL communication in the Compiled Graph provided by Ray. |
| Enables GPU communication overlap in the Compiled Graph provided by Ray. |
| Specifies the method for multiprocess workers, for example, "fork". |
| Path to the cache for storing downloaded assets. |
| Timeout for fetching images when serving multimodal models (default is 5 seconds). |
| Timeout for fetching videos when serving multimodal models (default is 30 seconds). |
| Timeout for fetching audio when serving multimodal models (default is 10 seconds). |
| Cache size in GiB for multimodal input cache (default is 8GiB). |
| Path to the XLA persistent cache directory (only for XLA devices). |
| If set, asserts on XLA recompilation after each execution step. |
| Chunk size for fused MoE layer (default is 32768). |
| If true, skips deprecation warnings. |
| If true, keeps the OpenAI API server alive even after engine errors. |
| Allows specifying a max sequence length greater than the default length of the model. |
| Forces FP8 Marlin for FP8 quantization regardless of hardware support. |
| Forces a specific load format. |
| Timeout for fetching response from backend server. |
| List of plugins to load. |
| Directory for saving Torch profiler traces. |
| If set, uses Triton implementations of AWQ. |
| If set, allows updating Lora adapters at runtime. |
| Skips peer-to-peer capability check. |
| List of quantization kernels to disable for performance comparisons. |
| If set, uses V1 code path. |
| Pads FP8 weights to 256 bytes for ROCm. |
| Divisor for dynamic query scale factor calculation for FP8 KV Cache. |
| Divisor for dynamic key scale factor calculation for FP8 KV Cache. |
| Divisor for dynamic value scale factor calculation for FP8 KV Cache. |
| If set, enables multiprocessing in LLM for the V1 code path. |
| Time interval for logging batch size. |
|
If set, AI Inference Server runs in development mode, enabling additional endpoints for debugging, for example |
VLLM_V1_OUTPUT_PROC_CHUNK_SIZE | Controls the maximum number of requests to handle in a single asyncio task for processing per-token outputs in the V1 AsyncLLM interface. It affects high-concurrency streaming requests. |
| If set, AI Inference Server disables the MLA attention optimizations. |
|
If set, AI Inference Server uses the Triton implementation of |
| Number of GPUs per worker in Ray. Can be a fraction to allow Ray to schedule multiple actors on a single GPU. |
| Specifies the indices used for the Ray bundle, for each worker. Format: comma-separated list of integers (e.g., "0,1,2,3"). |
|
Specifies the path for the |
| Enables contiguous cache fetching to avoid costly gather operations on Gaudi3. Only applicable to HPU contiguous cache. |
| Rank of the process in the data parallel setting. |
| World size of the data parallel setting. |
| IP address of the master node in the data parallel setting. |
| Port of the master node in the data parallel setting. |
| Whether to use the S3 path for model loading in CI by using RunAI Streamer. |
| Whether to use atomicAdd reduce in gptq/awq marlin kernel. |
| Whether to turn on the outlines cache for V0. This cache is unbounded and on disk, so it is unsafe for environments with malicious users. |
| If set, disables TPU-specific optimization for top-k & top-p sampling. |
Chapter 5. Viewing AI Inference Server metrics
vLLM exposes various metrics via the /metrics
endpoint on the AI Inference Server OpenAI-compatible API server.
You can start the server by using Python or by using Docker.
Procedure
Launch AI Inference Server and load your model as shown in the following example. The command also exposes the OpenAI-compatible API.
$ vllm serve unsloth/Llama-3.2-1B-Instruct
Query the /metrics endpoint of the OpenAI-compatible API to get the latest metrics from the server:
$ curl http://0.0.0.0:8000/metrics
Example output
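A truncated sketch of the Prometheus-format response is shown below; the metric names match the metrics listed in the next chapter and the values are illustrative:
# HELP vllm:num_requests_running Number of requests currently running on GPU.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="unsloth/Llama-3.2-1B-Instruct"} 0.0
# HELP vllm:prompt_tokens_total Total number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model_name="unsloth/Llama-3.2-1B-Instruct"} 128.0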
Chapter 6. AI Inference Server metrics
AI Inference Server exposes vLLM metrics that you can use to monitor the health of the system.
Metric Name | Description |
---|---|
| Number of requests currently running on GPU. |
| Number of requests waiting to be processed. |
| Running stats on LoRA requests. |
| Number of requests swapped to CPU. Deprecated: KV cache offloading is not used in V1. |
| GPU KV-cache usage. A value of 1 means 100% usage. |
| CPU KV-cache usage. A value of 1 means 100% usage. Deprecated: KV cache offloading is not used in V1. |
| CPU prefix cache block hit rate. Deprecated: KV cache offloading is not used in V1. |
|
GPU prefix cache block hit rate. Deprecated: Use |
| Cumulative number of preemptions from the engine. |
| Total number of prefill tokens processed. |
| Total number of generation tokens processed. |
| Histogram of the number of tokens per engine step. |
| Histogram of time to the first token in seconds. |
| Histogram of time per output token in seconds. |
| Histogram of end-to-end request latency in seconds. |
| Histogram of time spent in the WAITING phase for a request. |
| Histogram of time spent in the RUNNING phase for a request. |
| Histogram of time spent in the PREFILL phase for a request. |
| Histogram of time spent in the DECODE phase for a request. |
|
Histogram of time the request spent in the queue in seconds. Deprecated: Use |
| Histogram of time spent in the model forward pass in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead. |
| Histogram of time spent in the model execute function in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead. |
| Histogram of the number of prefill tokens processed. |
| Histogram of the number of generation tokens processed. |
| Histogram of the maximum number of requested generation tokens. |
|
Histogram of the |
|
Histogram of the |
| Count of successfully processed requests. |
| Speculative token acceptance rate. |
| Speculative decoding system efficiency. |
| Total number of accepted tokens. |
| Total number of draft tokens. |
| Total number of emitted tokens. |
Chapter 7. Deprecated metrics
The following metrics are deprecated and will be removed in a future version of AI Inference Server:
- vllm:num_requests_swapped
- vllm:cpu_cache_usage_perc
- vllm:cpu_prefix_cache_hit_rate (KV cache offloading is not used in V1)
- vllm:gpu_prefix_cache_hit_rate (replaced by queries+hits counters in V1)
- vllm:time_in_queue_requests (duplicated by vllm:request_queue_time_seconds)
- vllm:model_forward_time_milliseconds
- vllm:model_execute_time_milliseconds (prefill, decode, or inference time metrics should be used instead)
When metrics are deprecated in version X.Y, they are hidden in version X.Y+1 but can be re-enabled by using the --show-hidden-metrics-for-version=X.Y escape hatch. Deprecated metrics are completely removed in the following version X.Y+2.
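For example, a sketch of temporarily re-enabling metrics that were deprecated and hidden since version 0.7 while you migrate your dashboards; the model name is illustrative:
$ vllm serve unsloth/Llama-3.2-1B-Instruct --show-hidden-metrics-for-version=0.7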