Chapter 2. Complete list of vLLM server arguments


The following is a comprehensive list of the vLLM server arguments that you can use with the vllm serve command. An explanation of each server argument and default values is provided.

2.1. vLLM server arguments

--model

Name or path of the Hugging Face model to use.

Default value: facebook/opt-125m

--task

The task to use the model for. Each AI Inference Server instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, auto can be used to select it; otherwise, you must specify explicitly which task to use.

Default value: auto

Options: auto, generate, embedding, embed, classify, score, reward, transcription
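
For example, to serve a model explicitly for the embedding task rather than relying on auto detection (the model path shown is only a placeholder):

$ vllm serve <embedding_model_name_or_path> --task embed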

--tokenizer
Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path is used.
--hf-config-path
Name or path of the Hugging Face config to use. If unspecified, model name or path is used.
--skip-tokenizer-init
Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids.
--revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--code-revision
The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--tokenizer-revision
Revision of the Hugging Face tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--tokenizer-mode

The tokenizer mode.

  • auto uses the fast tokenizer if available.
  • slow always uses the slow tokenizer.
  • mistral always uses the mistral_common tokenizer.
  • custom uses --tokenizer to select the preregistered tokenizer.

Default value: auto

Options: auto, slow, mistral, custom

--trust-remote-code
Trust remote code from Hugging Face.
--allowed-local-media-path
Allows API requests to read local images or videos from directories specified on the server file system. This is a security risk and should only be enabled in trusted environments.
--download-dir
Directory to download and load the weights. Defaults to the default Hugging Face cache directory.
--load-format

The format of the model weights to load.

Default value: auto

Options: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer

  • auto tries to load the weights in the safetensors format and falls back to the PyTorch bin format if the safetensors format is not available.
  • pt loads the weights in the PyTorch bin format.
  • safetensors loads the weights in the safetensors format.
  • npcache loads the weights in PyTorch format and stores a NumPy cache to speed up loading.
  • dummy initializes the weights with random values, which is mainly for profiling.
  • tensorizer loads the weights using tensorizer from CoreWeave. See the Tensorize AI Inference Server Model script in the Examples section for more information.
  • runai_streamer loads the Safetensors weights using Run:ai Model Streamer.
  • bitsandbytes loads the weights using bitsandbytes quantization.
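
As an illustration, forcing the safetensors loader instead of relying on auto detection might look like the following (the model path is a placeholder):

$ vllm serve <model_name_or_path> --load-format safetensors
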
--config-format

The format of the model config to load.

Options: auto, hf, mistral

auto tries to load the config in hf format if available; if not, it tries to load the config in mistral format.

Default value: ConfigFormat.AUTO

--dtype

Data type for model weights and activations.

Default value: auto

Options: auto, half, float16, bfloat16, float, float32

  • auto uses FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
  • half for FP16. Recommended for AWQ quantization.
  • float16 is the same as half.
  • bfloat16 for a balance between precision and range.
  • float is shorthand for FP32 precision.
  • float32 for FP32 precision.
--kv-cache-dtype

Data type for kv cache storage. If auto, uses the model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).

Options: auto, fp8, fp8_e5m2, fp8_e4m3

Default value: auto
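
For example, a sketch of serving with an FP8 KV cache on supported hardware (placeholder model path; this can be combined with --calculate-kv-scales if you want scales computed dynamically):

$ vllm serve <model_name_or_path> --kv-cache-dtype fp8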

--max-model-len
Model context length. If unspecified, value is automatically derived from the model config.
--guided-decoding-backend

Which engine to use for guided decoding (JSON schema, regex, and so on) by default. Currently supports outlines-dev/outlines, mlc-ai/xgrammar, and noamgat/lm-format-enforcer. Can be overridden per request via the guided_decoding_backend parameter. Backend-specific options can be supplied in a comma-separated list following a colon after the backend name. Valid backends and all available options are:

  • xgrammar:no-fallback
  • xgrammar:disable-any-whitespace
  • outlines:no-fallback
  • lm-format-enforcer:no-fallback

Default value: xgrammar
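
For example, selecting the xgrammar backend with a backend-specific option appended after a colon (placeholder model path):

$ vllm serve <model_name_or_path> --guided-decoding-backend xgrammar:disable-any-whitespace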

--logits-processor-pattern
Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.
--model-impl

Which implementation of the model to use.

Default value: auto

Options: auto, vllm, transformers

  • auto tries to use the AI Inference Server implementation if it exists and falls back to the Transformers implementation if no AI Inference Server implementation is available.
  • vllm uses the AI Inference Server model implementation.
  • transformers uses the Transformers model implementation.
--distributed-executor-backend

Backend to use for distributed model workers, either ray or mp (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, mp is used to keep processing on a single host. Otherwise, it defaults to ray if Ray is installed, and fails otherwise. Note that TPU only supports Ray for distributed inference.

Options: ray, mp, uni, external_launcher

--pipeline-parallel-size, -pp

Number of pipeline parallel stages to split the model across, dividing model layers into sequential pipeline stages.

Default value: 1

--tensor-parallel-size, -tp

Number of GPUs to split the model across with tensor parallelism, sharing storage and computation load.

Default value: 1
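
As a sketch, combining both forms of parallelism to spread a large model over 8 GPUs, arranged as 2 pipeline stages of 4 tensor-parallel GPUs each (placeholder model path):

$ vllm serve <large_model_name_or_path> --tensor-parallel-size 4 --pipeline-parallel-size 2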

--enable-expert-parallel
Use expert parallelism instead of tensor parallelism for MoE layers.
--max-parallel-loading-workers
Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
--ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
--block-size

Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.

Options: 8, 16, 32, 64, 128

--enable-prefix-caching, --no-enable-prefix-caching
Enables automatic prefix caching. Use --no-enable-prefix-caching to disable explicitly.
--disable-sliding-window
Disables sliding window, capping to sliding window size.
--use-v2-block-manager
DEPRECATED: block manager v1 has been removed and SelfAttnBlockSpaceManager (block manager v2) is now the default. Setting this flag to True or False has no effect on AI Inference Server behavior.
--num-lookahead-slots

Experimental scheduling config necessary for speculative decoding. This will be replaced by the speculative config in the future; it is present to enable correctness tests until then.

Default value: 0

--seed
Random seed for operations.
--swap-space

CPU swap space size (GiB) per GPU.

Default value: 4

--cpu-offload-gb

The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory dynamically in each model forward pass.

Default value: 0
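
Following the example in the description, offloading 10 GiB per GPU so that a 24 GB GPU can behave roughly like a 34 GB one might look like this (placeholder model path):

$ vllm serve <model_name_or_path> --cpu-offload-gb 10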

--gpu-memory-utilization

The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, uses the default value of 0.9. This is a per-instance limit, and only applies to the current AI Inference Server instance. It does not matter if you have another AI Inference Server instance running on the same GPU. For example, if you have two AI Inference Server instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.

Default value: 0.9
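
For example, two instances sharing one GPU could each be started with half of the memory budget. The --port option is part of the serving frontend rather than the engine arguments listed here, and the port numbers are arbitrary examples:

$ vllm serve <model_a_name_or_path> --gpu-memory-utilization 0.5 --port 8000
$ vllm serve <model_b_name_or_path> --gpu-memory-utilization 0.5 --port 8001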

--num-gpu-blocks-override
If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
--max-num-batched-tokens
Maximum number of batched tokens per iteration. In vLLM, a batch is the set of all tokens from active sequences that are jointly fed into the model at each scheduler step. It is measured as "tokens per iteration" rather than "sequences per iteration".
--max-num-partial-prefills

For chunked prefill, the maximum number of concurrent partial prefills.

Default value: 1

--max-long-partial-prefills

For chunked prefill, the maximum number of prompts longer than --long-prefill-token-threshold that are prefilled concurrently. Setting this less than --max-num-partial-prefills allows shorter prompts to jump the queue ahead of longer prompts in some cases, improving latency.

Default value: 1

--long-prefill-token-threshold

For chunked prefill, a request is considered long if the prompt is longer than this number of tokens. Defaults to 4% of the model’s context length.

Default value: 0

--max-num-seqs
Maximum number of sequences per iteration.
--max-logprobs

Max number of log probs to return when logprobs is specified in SamplingParams.

Default value: 20

--disable-log-stats
Disable logging statistics.
--quantization, -q

Method used to quantize the weights. If None, first check the quantization_config attribute in the model config file. If that is None, assume the model weights are not quantized and use dtype to determine the data type of the weights.

Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None
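
For example, serving an AWQ-quantized checkpoint with the recommended half precision (placeholder model path; the checkpoint must actually contain AWQ weights):

$ vllm serve <awq_quantized_model_name_or_path> --quantization awq --dtype half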

--rope-scaling
RoPE scaling configuration in JSON format. For example, {"rope_type": "dynamic", "factor": 2.0}
--rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
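
A sketch of passing a RoPE scaling configuration as a JSON string on the command line; the scaling factor and context length below are illustrative only:

$ vllm serve <model_name_or_path> --rope-scaling '{"rope_type": "dynamic", "factor": 2.0}' --max-model-len 8192
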
--hf-overrides
Extra arguments for the Hugging Face config. This should be a JSON string that is parsed into a dictionary.
--enforce-eager
Always use eager-mode PyTorch. If False, uses eager mode and CUDA graphs in hybrid for maximal performance and flexibility.
--max-seq-len-to-capture

Maximum sequence length covered by CUDA graphs. When a sequence has a context length larger than this, AI Inference Server falls back to eager mode. Additionally, for encoder-decoder models, if the sequence length of the encoder input is larger than this, AI Inference Server falls back to eager mode.

Default value: 8192

--disable-custom-all-reduce
See ParallelConfig.
--tokenizer-pool-size

Size of tokenizer pool to use for asynchronous tokenization. If 0, uses synchronous tokenization.

Default value: 0

--tokenizer-pool-type

Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.

Default value: ray

--tokenizer-pool-extra-config
Extra config for tokenizer pool. This should be a JSON string that is parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
--limit-mm-per-prompt
For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
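
For example, allowing up to 4 images and 1 video per prompt for a multimodal model (placeholder model path):

$ vllm serve <multimodal_model_name_or_path> --limit-mm-per-prompt image=4,video=1
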
--mm-processor-kwargs
Overrides for the multimodal input mapping and processing, such as the image processor. For example: {"num_crops": 4}.
--disable-mm-preprocessor-cache
If true, then disables caching of the multi-modal preprocessor and mapper. (not recommended)
--enable-lora
If True, enable handling of LoRA adapters.
--enable-lora-bias
If True, enable bias for LoRA adapters.
--max-loras

Max number of LoRAs in a single batch.

Default value: 1

--max-lora-rank

Max LoRA rank.

Default value: 16

--lora-extra-vocab-size

Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).

Default value: 256

--lora-dtype

Data type for LoRA. If auto, will default to base model dtype.

Default value: auto

Options: auto, float16, bfloat16
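
Putting these LoRA options together, a sketch of enabling LoRA handling with room for 4 adapters of rank up to 32 per batch (placeholder model path; the adapters themselves are registered separately):

$ vllm serve <base_model_name_or_path> --enable-lora --max-loras 4 --max-lora-rank 32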

--long-lora-scaling-factors
Specify multiple scaling factors (which can be different from the base model scaling factor; see, for example, Long LoRA) to allow multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
--max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be greater than max_loras. Defaults to max_loras.
--fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this uses the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
--enable-prompt-adapter
If True, enable handling of PromptAdapters.
--max-prompt-adapters

Max number of PromptAdapters in a batch.

Default value: 1

--max-prompt-adapter-token

Max number of PromptAdapter tokens.

Default value: 0

--device

Device type for AI Inference Server execution.

Options: auto, cuda, neuron, cpu, openvino, tpu, xpu, hpu

Default value: auto

--num-scheduler-steps

Maximum number of forward steps per scheduler call.

Default value: 1

--use-tqdm-on-load, --no-use-tqdm-on-load

Whether to enable or disable progress bar when loading model weights.

Default value: True

--multi-step-stream-outputs

If False, then multi-step streams outputs at the end of all steps.

Default value: True

--scheduler-delay-factor

Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.

Default value: 0.0

--enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
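
For example, enabling chunked prefill with an explicit per-iteration token budget (the value 2048 is illustrative; placeholder model path):

$ vllm serve <model_name_or_path> --enable-chunked-prefill --max-num-batched-tokens 2048
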
--speculative-model
The name of the draft model to be used in speculative decoding.
--speculative-model-quantization

Method used to quantize the weights of the speculative model. If None, AI Inference Server first checks the quantization_config attribute in the model config file. If that is None, AI Inference Server assumes the model weights are not quantized and uses dtype to determine the data type of the weights.

Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None

--num-speculative-tokens
The number of speculative tokens to sample from the draft model in speculative decoding.
--speculative-disable-mqa-scorer
If set to True, the MQA scorer is disabled in speculative decoding and falls back to batch expansion.
--speculative-draft-tensor-parallel-size, -spec-draft-tp
Number of tensor parallel replicas for the draft model in speculative decoding.
--speculative-max-model-len
The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
--speculative-disable-by-batch-size
Disable speculative decoding for new incoming requests if the number of enqueued requests is larger than this value.
--ngram-prompt-lookup-max
Max size of window for ngram prompt lookup in speculative decoding.
--ngram-prompt-lookup-min
Minimum size of window for ngram prompt lookup in speculative decoding.
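
A sketch of draft-model speculative decoding that proposes 5 tokens per step; both model paths are placeholders, and the draft model typically needs to be much smaller than, and share the vocabulary of, the target model:

$ vllm serve <target_model_name_or_path> --speculative-model <draft_model_name_or_path> --num-speculative-tokens 5
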
--spec-decoding-acceptance-method

Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported:

  1. RejectionSampler: Does not allow changing the acceptance rate of draft tokens.
  2. TypicalAcceptanceSampler: Configurable, allows for a higher acceptance rate at the cost of lower quality, and vice versa.

Default value: rejection_sampler

Options: rejection_sampler, typical_acceptance_sampler

--typical-acceptance-sampler-posterior-threshold
Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions during speculative decoding. Defaults to 0.09.
--typical-acceptance-sampler-posterior-alpha
A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to square root of --typical-acceptance-sampler-posterior-threshold, for example, 0.3.
--disable-logprobs-during-spec-decoding
If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal sampling, target sampling, and after accepted tokens are determined.
--model-loader-extra-config
Extra config for model loader. This is passed to the model loader corresponding to the chosen load_format. This should be a JSON string that is parsed into a dictionary.
--ignore-patterns

The pattern(s) to ignore when loading the model. Defaults to original/**/* to avoid repeated loading of llama’s checkpoints.

Default value: []

--preemption-mode
If recompute, the engine performs preemption by recomputing; If swap, the engine performs preemption by block swapping.
--served-model-name
The model name(s) used in the API. If multiple names are provided, the server responds to any of the provided names. The model name in the model field of a response is the first name in this list. If not specified, the model name is the same as the --model argument. Note that the name(s) are also used in the model_name tag content for Prometheus metrics. If multiple names are provided, the metrics tag takes the first one.
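
For example, exposing one model under two API names, where the first name is what appears in response model fields and in metrics (both names are illustrative):

$ vllm serve <model_name_or_path> --served-model-name my-model my-model-prod
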
--qlora-adapter-name-or-path
Name or path of the QLoRA adapter.
--show-hidden-metrics-for-version
Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you use --show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release.
--otlp-traces-endpoint
Target URL to which OpenTelemetry traces are sent.
--collect-detailed-traces
Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, the server collects detailed traces for the specified modules. This involves the use of possibly costly and/or blocking operations and might therefore have a performance impact.
--disable-async-output-proc
Disable async output processing. This may result in lower performance.
--scheduling-policy

The scheduling policy to use: fcfs (first come, first served; requests are handled in order of arrival; default) or priority (requests are handled based on a given priority, where a lower value means earlier handling, with time of arrival deciding ties).

Default value: fcfs

Options: fcfs, priority

--scheduler-cls

The scheduler class to use. vllm.core.scheduler.Scheduler is the default scheduler. Can be a class directly or the path to a class of form mod.custom_class.

Default value: vllm.core.scheduler.Scheduler

--override-neuron-config
Override or set the Neuron device configuration, for example, {"cast_logits_dtype": "bfloat16"}.
--override-pooler-config
Override or set the pooling method for pooling models, for example, {"pooling_type": "mean", "normalize": false}.
--compilation-config, -O
torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it is interpreted as the optimization level. NOTE: level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production. To specify the full compilation config, use a JSON string. Following the convention of traditional compilers, using -O without a space is also supported; -O3 is equivalent to -O 3.
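
For example, the following two invocations are equivalent ways of requesting optimization level 3 (placeholder model path):

$ vllm serve <model_name_or_path> -O3
$ vllm serve <model_name_or_path> --compilation-config 3
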
--kv-transfer-config
The configurations for distributed KV cache transfer. Should be a JSON string.
--worker-cls

The worker class to use for distributed execution.

Default value: auto

--worker-extension-cls
The worker extension class on top of the worker class. This is useful if you want to add new functions to the worker class without changing the existing functions.
--generation-config

The folder path to the generation config. Defaults to auto, where the generation config is loaded from the model path. If set to vllm, no generation config is loaded and the AI Inference Server defaults are used. If set to a folder path, the generation config is loaded from the specified folder path. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.

Default value: auto

--override-generation-config
Overrides or sets the generation config in JSON format, for example, {"temperature": 0.5}. If used with --generation-config=auto, the override parameters are merged with the default config from the model. If the generation config is None, only the override parameters are used.
--enable-sleep-mode
Enable sleep mode for the engine. Only supported for CUDA platform.
--calculate-kv-scales
This enables dynamic calculation of k_scale and v_scale when kv-cache-dtype is fp8. If calculate-kv-scales is false, the scales are loaded from the model checkpoint if available. Otherwise, the scales default to 1.0.
--additional-config
Additional config for the specified platform in JSON format. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. The input format is like {"<config_key>": "<config_value>"}.
--enable-reasoning
Whether to enable reasoning_content for the model. If enabled, the model is able to generate reasoning content.
--reasoning-parser

Select the reasoning parser depending on the model that you are using. This is used to parse the reasoning content into OpenAI API format. Required for --enable-reasoning.

Options: deepseek_r1
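
For example, enabling reasoning content for a DeepSeek-R1-style model (placeholder model path):

$ vllm serve <reasoning_model_name_or_path> --enable-reasoning --reasoning-parser deepseek_r1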

--chat-template
Pass a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input. For more information, see Chat Template.
--tool-call-parser
Options: deepseek_v3, granite-20b-fc, granite, hermes, internlm, jamba, llama4_json, llama3_json, mistral, phi4_mini_json, pythonic, or name registered in --tool-parser-plugin.
--cuda-graph-sizes

CUDA graph capture sizes. If one value is provided, the capture list follows the pattern [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)]. If more than one value is provided (for example, 1 2 128), the capture list follows the provided list.

Default: 512
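
As a worked example of the single-value pattern above, passing 64 would produce the capture list [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64], while passing an explicit list uses it as-is (placeholder model path):

$ vllm serve <model_name_or_path> --cuda-graph-sizes 1 2 128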

--data-parallel-address, -dpa
Address of the data parallel cluster head-node.
--data-parallel-rpc-port, -dpp
Port for data parallel RPC communication.
--data-parallel-size, -dp

Number of data parallel groups. MoE layers are sharded according to the product of the tensor parallel size and data parallel size.

Default: 1

--data-parallel-size-local, -dpl
Number of data parallel replicas to run on this node.
--disable-cascade-attn, --no-disable-cascade-attn

Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it can be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention is only used when the heuristic determines that it is beneficial.

Default: False

--disable-chunked-mm-input, --no-disable-chunked-mm-input

If set to true and chunked prefill is enabled, do not partially schedule a multimodal item. Only used in V1. This ensures that if a request has a mixed prompt (for example, text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (for example, TTTTIIIII, leaving IIIII), the item is scheduled as TTTT in one step and IIIIIIIIII in the next.

Default: False

--enable-prompt-embeds, --no-enable-prompt-embeds

If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation.

Default: False

--guided-decoding-disable-additional-properties, --no-guided-decoding-disable-additional-properties

If True, the guidance backend will not use additionalProperties in the JSON schema. This is only supported for the guidance backend and is used to better align its behaviour with outlines and xgrammar.

Default: False

--guided-decoding-disable-any-whitespace, --no-guided-decoding-disable-any-whitespace

If True, the model will not generate any whitespace during guided decoding. This is only supported for xgrammar and guidance backends.

Default: False

--guided-decoding-disable-fallback, --no-guided-decoding-disable-fallback

If True, vLLM does not fall back to a different backend on error.

Default: False

--hf-token
The token to use as HTTP bearer authorization for remote files. If True, uses the token generated when running huggingface-cli login, which is stored in ~/.huggingface.
--kv-events-config
The configuration for event publishing. Should either be a valid JSON string or JSON keys passed individually.
--prefix-caching-hash-algo

Set the hash algorithm for prefix caching:

Options: builtin, sha256

  • builtin is Python’s built-in hash.
  • sha256 is collision resistant but with certain overheads.

Default: builtin

--pt-load-map-location

Map location for loading the PyTorch checkpoint. To support loading checkpoints that can only be loaded on certain devices such as cuda, this is equivalent to {"": "cuda"}. Another supported format is mapping between different devices, for example from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from the command line, the strings in the dictionary need to be double quoted for JSON parsing. For more details, see the original documentation for map_location at https://pytorch.org/docs/stable/generated/torch.load.html

Default: cpu
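
For example, remapping a checkpoint saved on GPU 1 so that it loads on GPU 0; note the double-quoted strings inside the single-quoted JSON (placeholder model path):

$ vllm serve <model_name_or_path> --pt-load-map-location '{"cuda:1": "cuda:0"}'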

--speculative-config
The configurations for speculative decoding. Should be a JSON string.
--ssl-keyfile
Location of your TLS private key in PEM format.

2.2. Async engine arguments

usage: vllm serve [-h] [--disable-log-requests]
--disable-log-requests
Disable logging requests.