Chapter 2. vLLM server usage
$ vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch}
- chat: Generate chat completions via the running API server.
- complete: Generate text completions based on the given prompt via the running API server.
- serve: Start the vLLM OpenAI-compatible API server.
- bench: Run vLLM benchmarks (latency, serve, throughput).
- collect-env: Collect environment information.
- run-batch: Run batch prompts and write results to file.
2.1. vllm serve arguments
vllm serve launches a local server that loads and serves the language model.
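For example, a minimal invocation that serves a small model on the default port (the model name, host, and port values are illustrative):
$ vllm serve Qwen/Qwen3-0.6B --host 0.0.0.0 --port 8000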
2.1.1. JSON CLI arguments
When passing JSON CLI arguments, the following sets of arguments are equivalent:
- --json-arg '{"key1": "value1", "key2": {"key3": "value2"}}'
- --json-arg.key1 value1 --json-arg.key2.key3 value2
Additionally, list elements can be passed individually using +:
- --json-arg '{"key4": ["value3", "value4", "value5"]}'
- --json-arg.key4+ value3 --json-arg.key4+='value4,value5'
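As a concrete sketch, the two invocations below are equivalent ways of passing the JSON-valued --rope-scaling option described later in this chapter (the model name and values are illustrative):
$ vllm serve Qwen/Qwen3-0.6B --rope-scaling '{"rope_type": "dynamic", "factor": 2.0}'
$ vllm serve Qwen/Qwen3-0.6B --rope-scaling.rope_type dynamic --rope-scaling.factor 2.0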
2.1.2. Options
2.1.2.1. --headless
Run in headless mode. See multi-node data parallel documentation for more details.
Default: False
2.1.2.2. --api-server-count, -asc
How many API server processes to run.
Default: 1
2.1.2.3. --config
Read CLI options from a config file. Must be a YAML file containing the options described at https://docs.vllm.ai/en/latest/configuration/serve_args.html
Default: None
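A minimal sketch of using a config file (the file name and values are illustrative; the keys mirror the CLI option names documented at the link above). Given a file serve-config.yaml containing, for example, "port: 8080" and "max-model-len: 8192", the server can be started with:
$ vllm serve Qwen/Qwen3-0.6B --config serve-config.yaml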
2.1.2.4. --disable-log-stats
Disable logging statistics.
Default: False
2.1.2.5. --enable-log-requests, --no-enable-log-requests
Enable logging requests.
Default: False
2.1.2.6. --disable-log-requests, --no-disable-log-requests
This argument is deprecated.
Disable logging requests.
Default: True
2.1.3. Frontend
Arguments for the OpenAI-compatible frontend server.
2.1.3.1. --host
Host name.
Default: None
2.1.3.2. --port
Port number.
Default: 8000
2.1.3.3. --uds
Unix domain socket path. If set, host and port arguments are ignored.
Default: None
2.1.3.4. --uvicorn-log-level
Possible choices: critical, debug, error, info, trace, warning
Log level for uvicorn.
Default: info
2.1.3.5. --disable-uvicorn-access-log, --no-disable-uvicorn-access-log
Disable uvicorn access log.
Default: False
2.1.3.6. --allow-credentials, --no-allow-credentials
Allow credentials.
Default: False
2.1.3.7. --allowed-origins
Allowed origins.
Default: ['*']
2.1.3.8. --allowed-methods
Allowed methods.
Default: ['*']
2.1.3.9. --allowed-headers
Allowed headers.
Default: ['*']
2.1.3.10. --api-key
If provided, the server will require one of these keys to be presented in the header.
Default: None
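For example (the key value is illustrative), the server can be started with an API key and then queried with a matching bearer token, assuming the standard OpenAI-style Authorization header:
$ vllm serve Qwen/Qwen3-0.6B --api-key my-secret-key
$ curl http://localhost:8000/v1/models -H "Authorization: Bearer my-secret-key"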
2.1.3.11. --lora-modules
LoRA module configurations, in either 'name=path' format, JSON format, or JSON list format. Example (old format): 'name=path'. Example (new format): {"name": "name", "path": "lora_path", "base_model_name": "id"}
Default: None
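A sketch showing both accepted forms (the base model, adapter name, and adapter path are illustrative):
$ vllm serve meta-llama/Llama-3.1-8B-Instruct --enable-lora --lora-modules sql-lora=/adapters/sql-lora
$ vllm serve meta-llama/Llama-3.1-8B-Instruct --enable-lora --lora-modules '{"name": "sql-lora", "path": "/adapters/sql-lora", "base_model_name": "meta-llama/Llama-3.1-8B-Instruct"}'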
2.1.3.12. --chat-template
The file path to the chat template, or the template in single-line form for the specified model.
Default: None
2.1.3.13. --chat-template-content-format
Possible choices: auto, openai, string
The format to render message content within a chat template.
- "string" will render the content as a string. Example: "Hello World"
- "openai" will render the content as a list of dictionaries, similar to the OpenAI schema. Example: [{"type": "text", "text": "Hello world!"}]
Default: auto
2.1.3.14. --trust-request-chat-template, --no-trust-request-chat-template
Whether to trust the chat template provided in the request. If False, the server always uses the chat template specified by --chat-template or the one provided by the tokenizer.
Default: False
2.1.3.15. --response-role
The role name to return if request.add_generation_prompt=true.
Default: assistant
2.1.3.16. --ssl-keyfile
The file path to the SSL key file.
Default: None
2.1.3.17. --ssl-certfile
The file path to the SSL cert file.
Default: None
2.1.3.18. --ssl-ca-certs
The CA certificates file.
Default: None
2.1.3.19. --enable-ssl-refresh, --no-enable-ssl-refresh
Refresh the SSL context when SSL certificate files change.
Default: False
2.1.3.20. --ssl-cert-reqs
Whether a client certificate is required (see the stdlib ssl module's documentation).
Default: 0
2.1.3.21. --root-path
FastAPI root_path when the app is behind a path-based routing proxy.
Default: None
2.1.3.22. --middleware
Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware().
Default: []
2.1.3.23. --return-tokens-as-token-ids, --no-return-tokens-as-token-ids
When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified.
Default: False
2.1.3.24. --disable-frontend-multiprocessing, --no-disable-frontend-multiprocessing
If specified, will run the OpenAI frontend server in the same process as the model serving engine.
Default: False
2.1.3.25. --enable-request-id-headers, --no-enable-request-id-headers
If specified, API server will add X-Request-Id header to responses.
Default: False
2.1.3.26. --enable-auto-tool-choice, --no-enable-auto-tool-choice
Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use.
Default: False
2.1.3.27. --exclude-tools-when-tool-choice-none, --no-exclude-tools-when-tool-choice-none
If specified, exclude tool definitions in prompts when tool_choice='none'.
Default: False
2.1.3.28. --tool-call-parser
Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice. You can choose any option from the built-in parsers or register a plugin via --tool-parser-plugin.
Default: None
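For example, a model served for automatic tool calling might combine both flags (the model name is illustrative, and the parser name must be a built-in parser appropriate for that model):
$ vllm serve Qwen/Qwen3-0.6B --enable-auto-tool-choice --tool-call-parser hermes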
2.1.3.29. --tool-parser-plugin
Specify the tool parser plugin used to parse the model-generated tool calls into OpenAI API format. The names registered in this plugin can be used in --tool-call-parser.
Default: ``
2.1.3.30. --tool-server
Comma-separated list of host:port pairs (IPv4, IPv6, or hostname). Examples: 127.0.0.1:8000, [::1]:8000, localhost:1234. Or demo for demonstration purposes.
Default: None
2.1.3.31. --log-config-file
Path to the logging config JSON file for both vllm and uvicorn.
Default: None
2.1.3.32. --max-log-len
Max number of prompt characters or prompt ID numbers being printed in log. The default of None means unlimited.
Default: None
2.1.3.33. --disable-fastapi-docs, --no-disable-fastapi-docs
Disable FastAPI’s OpenAPI schema, Swagger UI, and ReDoc endpoint.
Default: False
2.1.3.34. --enable-prompt-tokens-details, --no-enable-prompt-tokens-details
If set to True, enable prompt_tokens_details in usage.
Default: False
2.1.3.35. --enable-server-load-tracking, --no-enable-server-load-tracking
If set to True, enable tracking server_load_metrics in the app state.
Default: False
2.1.3.36. --enable-force-include-usage, --no-enable-force-include-usage
If set to True, include usage on every request.
Default: False
2.1.3.37. --enable-tokenizer-info-endpoint, --no-enable-tokenizer-info-endpoint
Enable the /get_tokenizer_info endpoint. May expose chat templates and other tokenizer configuration.
Default: False
2.1.3.38. --enable-log-outputs, --no-enable-log-outputs
If True, log model outputs (generations). Requires --enable-log-requests.
Default: False
2.1.3.39. --h11-max-incomplete-event-size
Maximum size (bytes) of an incomplete HTTP event (header or body) for the h11 parser. Helps mitigate header abuse.
Default: 4194304 (4 MB)
2.1.3.40. --h11-max-header-count
Maximum number of HTTP headers allowed in a request for the h11 parser. Helps mitigate header abuse.
Default: 256
2.1.3.41. --log-error-stack, --no-log-error-stack
If set to True, log the stack trace of error responses.
Default: False
2.1.4. ModelConfig
Configuration for the model.
2.1.4.1. --model
Name or path of the Hugging Face model to use. It is also used as the content for the model_name tag in metrics output when served_model_name is not specified.
Default: Qwen/Qwen3-0.6B
2.1.4.2. --runner
Possible choices: auto, draft, generate, pooling
The type of model runner to use. Each vLLM instance only supports one model runner, even if the same model can be used for multiple types.
Default: auto
2.1.4.3. --convert
Possible choices: auto, classify, embed, none, reward
Convert the model using adapters defined in vllm.model_executor.models.adapters. The most common use case is to adapt a text generation model to be used for pooling tasks.
Default: auto
2.1.4.4. --task
Possible choices: auto, classify, draft, embed, embedding, generate, reward, score, transcription, None
This argument is deprecated.
The task to use the model for. If the model supports more than one model runner, this is used to select which model runner to run.
Note that the model may support other tasks using the same model runner.
Default: None
2.1.4.5. --tokenizer
Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path will be used.
Default: None
2.1.4.6. --tokenizer-mode
Possible choices: auto, custom, mistral, slow
Tokenizer mode:
- "auto" will use the fast tokenizer if available.
- "slow" will always use the slow tokenizer.
- "mistral" will always use the tokenizer from mistral_common.
- "custom" will use --tokenizer to select the preregistered tokenizer.
Default: auto
2.1.4.7. --trust-remote-code, --no-trust-remote-code
Trust remote code (e.g., from HuggingFace) when downloading the model and tokenizer.
Default: False
2.1.4.8. --dtype
Possible choices: auto, bfloat16, float, float16, float32, half
Data type for model weights and activations:
- "auto" will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
- "half" for FP16. Recommended for AWQ quantization.
- "float16" is the same as "half".
- "bfloat16" for a balance between precision and range.
- "float" is shorthand for FP32 precision.
- "float32" for FP32 precision.
Default: auto
2.1.4.9. --seed
Random seed for reproducibility. Initialized to None in V0, but initialized to 0 in V1.
Default: None
2.1.4.10. --hf-config-path
Name or path of the Hugging Face config to use. If unspecified, model name or path will be used.
Default: None
2.1.4.11. --allowed-local-media-path
Allows API requests to read local images or videos from the directories specified on the server file system. This is a security risk and should only be enabled in trusted environments.
Default: ``
2.1.4.12. --allowed-media-domains
If set, only media URLs that belong to this domain can be used for multi-modal inputs.
Default: None
2.1.4.13. --revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
Default: None
2.1.4.14. --code-revision
The specific revision to use for the model code on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
Default: None
2.1.4.15. --rope-scaling
RoPE scaling configuration. For example, {"rope_type":"dynamic","factor":2.0}.
Should either be a valid JSON string or JSON keys passed individually.
Default: {}
2.1.4.16. --rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
Default: None
2.1.4.17. --tokenizer-revision
The specific revision to use for the tokenizer on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
Default: None
2.1.4.18. --max-model-len
Model context length (prompt and output). If unspecified, will be automatically derived from the model config.
When passed via --max-model-len, human-readable values with k/m/g/K/M/G suffixes are supported, including decimal values. Examples:
- '1k' -> 1,000
- '1K' -> 1,024
- '25.6k' -> 25,600
Default: None
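For instance (the model name is illustrative, and the value must not exceed the context length the model supports):
$ vllm serve Qwen/Qwen3-0.6B --max-model-len 32k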
2.1.4.19. --quantization, -q
Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
2.1.4.20. --enforce-eager, --no-enforce-eager
Whether to always use eager-mode PyTorch. If True, we will disable CUDA graph and always execute the model in eager mode. If False, we will use CUDA graph and eager execution in hybrid for maximal performance and flexibility.
Default: False
2.1.4.21. --max-logprobs
Maximum number of log probabilities to return when logprobs is specified in SamplingParams. The default value comes from the default for the OpenAI Chat Completions API. -1 means no cap, i.e. all (output_length * vocab_size) logprobs are allowed to be returned, which may cause OOM.
Default: 20
2.1.4.22. --logprobs-mode
Possible choices: processed_logits, processed_logprobs, raw_logits, raw_logprobs
Indicates the content returned in the logprobs and prompt_logprobs. Supported mode: 1) raw_logprobs, 2) processed_logprobs, 3) raw_logits, 4) processed_logits. Raw means the values before applying any logit processors, like bad words. Processed means the values after applying all processors, including temperature and top_k/top_p.
Default: raw_logprobs
2.1.4.23. --disable-sliding-window, --no-disable-sliding-window
Whether to disable sliding window. If True, we will disable the sliding window functionality of the model, capping to sliding window size. If the model does not support sliding window, this argument is ignored.
Default: False
2.1.4.24. --disable-cascade-attn, --no-disable-cascade-attn
Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it could be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention will be only used when the heuristic tells that it’s beneficial.
Default: False
2.1.4.25. --skip-tokenizer-init, --no-skip-tokenizer-init
Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for the prompt from the input. The generated output will contain token ids.
Default: False
2.1.4.26. --enable-prompt-embeds, --no-enable-prompt-embeds
If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation.
Default: False
2.1.4.27. --served-model-name
The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that these names will also be used in the model_name tag content of Prometheus metrics; if multiple names are provided, the metrics tag will take the first one.
Default: None
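For example (the names are illustrative), clients can then address the model by its alias instead of the full Hugging Face path:
$ vllm serve Qwen/Qwen3-0.6B --served-model-name my-model
$ curl http://localhost:8000/v1/models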
2.1.4.28. --config-format
Possible choices: auto, hf, mistral
The format of the model config to load:
- "auto" will try to load the config in hf format if available else it will try to load in mistral format.
- "hf" will load the config in hf format.
- "mistral" will load the config in mistral format.
Default: auto
2.1.4.29. --hf-token
The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
Default: None
2.1.4.30. --hf-overrides
If a dictionary, contains arguments to be forwarded to the Hugging Face config. If a callable, it is called to update the HuggingFace config.
Default: {}
2.1.4.31. --pooler-config
Pooler config which controls the behaviour of output pooling in pooling models.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.4.32. --override-pooler-config
This argument is deprecated.
Use pooler_config instead. This field will be removed in v0.12.0 or v1.0.0, whichever is sooner.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.4.33. --logits-processor-pattern
Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.
Default: None
2.1.4.34. --generation-config
The folder path to the generation config. Defaults to "auto", in which case the generation config will be loaded from the model path. If set to "vllm", no generation config is loaded and vLLM defaults will be used. If set to a folder path, the generation config will be loaded from the specified folder path. If max_new_tokens is specified in the generation config, then it sets a server-wide limit on the number of output tokens for all requests.
Default: auto
2.1.4.35. --override-generation-config
Overrides or sets generation config. For example, {"temperature": 0.5}. If used with --generation-config auto, the override parameters will be merged with the default config from the model. If used with --generation-config vllm, only the override parameters are used.
Should either be a valid JSON string or JSON keys passed individually.
Default: {}
2.1.4.36. --enable-sleep-mode, --no-enable-sleep-mode
Enable sleep mode for the engine (only CUDA platform is supported).
Default: False
2.1.4.37. --model-impl
Possible choices: auto, terratorch, transformers, vllm
Which implementation of the model to use:
- "auto" will try to use the vLLM implementation, if it exists, and fall back to the Transformers implementation if no vLLM implementation is available.
- "vllm" will use the vLLM model implementation.
- "transformers" will use the Transformers model implementation.
- "terratorch" will use the TerraTorch model implementation.
Default: auto
2.1.4.38. --override-attention-dtype
Override the dtype for attention.
Default: None
2.1.4.39. --logits-processors
One or more logits processors' fully-qualified class names or class definitions.
Default: None
2.1.4.40. --io-processor-plugin
IOProcessor plugin name to load at model startup.
Default: None
2.1.5. LoadConfig
Configuration for loading the model weights.
2.1.5.1. --load-format
The format of the model weights to load:
- "auto" will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available.
- "pt" will load the weights in the pytorch bin format.
- "safetensors" will load the weights in the safetensors format.
- "npcache" will load the weights in pytorch format and store a numpy cache to speed up the loading.
- "dummy" will initialize the weights with random values, which is mainly for profiling.
- "tensorizer" will use CoreWeave’s tensorizer library for fast weight loading. See the Tensorize vLLM Model script in the Examples section for more information.
- "runai_streamer" will load the Safetensors weights using Run:ai Model Streamer.
- "bitsandbytes" will load the weights using bitsandbytes quantization.
- "sharded_state" will load weights from pre-sharded checkpoint files, supporting efficient loading of tensor-parallel models.
- "gguf" will load weights from GGUF format files (details specified in https://github.com/ggml-org/ggml/blob/master/docs/gguf.md).
- "mistral" will load weights from consolidated safetensors files used by Mistral models.
- Other custom values can be supported via plugins.
Default: auto
2.1.5.2. --download-dir
Directory to download and load the weights. Defaults to the default cache directory of Hugging Face.
Default: None
2.1.5.3. --safetensors-load-strategy
Specifies the loading strategy for safetensors weights.
- "lazy" (default): Weights are memory-mapped from the file. This enables on-demand loading and is highly efficient for models on local storage.
- "eager": The entire file is read into CPU memory upfront before loading. This is recommended for models on network filesystems (e.g., Lustre, NFS) as it avoids inefficient random reads, significantly speeding up model initialization. However, it uses more CPU RAM.
Default: lazy
2.1.5.4. --model-loader-extra-config
Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format.
Default: {}
2.1.5.5. --ignore-patterns
The list of patterns to ignore when loading the model. Defaults to "original/*/" to avoid repeated loading of Llama's checkpoints.
Default: None
2.1.5.6. --use-tqdm-on-load, --no-use-tqdm-on-load
Whether to enable tqdm for showing progress bar when loading model weights.
Default: True
2.1.5.7. --pt-load-map-location
The map location for loading a PyTorch checkpoint, to support loading checkpoints that can only be loaded on certain devices such as "cuda"; this is equivalent to {"": "cuda"}. Another supported format is a mapping between devices, for example from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from the command line, the strings in the dictionary need to be double quoted for JSON parsing. For more details, see the original documentation for map_location in https://pytorch.org/docs/stable/generated/torch.load.html
Default: cpu
2.1.6. StructuredOutputsConfig
Dataclass which contains structured outputs config for the engine.
2.1.6.1. --reasoning-parser
Possible choices: deepseek_r1, glm45, openai_gptoss, granite, hunyuan_a13b, mistral, qwen3, seed_oss, step3
Select the reasoning parser depending on the model that you’re using. This is used to parse the reasoning content into OpenAI API format.
Default: ``
2.1.6.2. --guided-decoding-backend
This argument is deprecated.
--guided-decoding-backend will be removed in v0.12.0.
Default: None
2.1.6.3. --guided-decoding-disable-fallback
This argument is deprecated.
--guided-decoding-disable-fallback will be removed in v0.12.0.
Default: None
2.1.6.4. --guided-decoding-disable-any-whitespace
This argument is deprecated.
--guided-decoding-disable-any-whitespace will be removed in v0.12.0.
Default: None
2.1.6.5. --guided-decoding-disable-additional-properties
This argument is deprecated.
--guided-decoding-disable-additional-properties will be removed in v0.12.0.
Default: None
2.1.7. ParallelConfig
Configuration for the distributed execution.
2.1.7.1. --distributed-executor-backend
Possible choices: external_launcher, mp, ray, uni
Backend to use for distributed model workers, either "ray" or "mp" (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, "mp" will be used to keep processing on a single host. Otherwise, this will default to "ray" if Ray is installed and fail otherwise. Note that TPU only supports Ray for distributed inference.
Default: None
2.1.7.2. --pipeline-parallel-size, -pp
Number of pipeline parallel groups.
Default: 1
2.1.7.3. --tensor-parallel-size, -tp
Number of tensor parallel groups.
Default: 1
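For example, to shard a larger model across 4 GPUs on a single host with tensor parallelism (the model name and GPU count are illustrative):
$ vllm serve meta-llama/Llama-3.1-70B-Instruct --tensor-parallel-size 4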
2.1.7.4. --decode-context-parallel-size, -dcp
Number of decode context parallel groups. Because the world size does not change with DCP, it simply reuses the GPUs of the TP group, and tp_size needs to be divisible by dcp_size.
Default: 1
2.1.7.5. --data-parallel-size, -dp
Number of data parallel groups. MoE layers will be sharded according to the product of the tensor parallel size and data parallel size.
Default: 1
2.1.7.6. --data-parallel-rank, -dpn
Data parallel rank of this instance. When set, enables external load balancer mode.
Default: None
2.1.7.7. --data-parallel-start-rank, -dpr
Starting data parallel rank for secondary nodes.
Default: None
2.1.7.8. --data-parallel-size-local, -dpl
Number of data parallel replicas to run on this node.
Default: None
2.1.7.9. --data-parallel-address, -dpa
Address of data parallel cluster head-node.
Default: None
2.1.7.10. --data-parallel-rpc-port, -dpp
Port for data parallel RPC communication.
Default: None
2.1.7.11. --data-parallel-backend, -dpb
Backend for data parallel, either "mp" or "ray".
Default: mp
2.1.7.12. --data-parallel-hybrid-lb, --no-data-parallel-hybrid-lb
Whether to use "hybrid" DP LB mode. Applies only to online serving and when data_parallel_size > 0. Enables running an AsyncLLM and API server on a "per-node" basis where vLLM load balances between local data parallel ranks, but an external LB balances between vLLM nodes/replicas. Set explicitly in conjunction with --data-parallel-start-rank.
Default: False
2.1.7.13. --enable-expert-parallel, --no-enable-expert-parallel
Use expert parallelism instead of tensor parallelism for MoE layers.
Default: False
2.1.7.14. --enable-dbo, --no-enable-dbo
Enable dual batch overlap for the model executor.
Default: False
2.1.7.15. --dbo-decode-token-threshold
The threshold for dual batch overlap for batches only containing decodes. If the number of tokens in the request is greater than this threshold, microbatching will be used. Otherwise, the request will be processed in a single batch.
Default: 32
2.1.7.16. --dbo-prefill-token-threshold
The threshold for dual batch overlap for batches that contain one or more prefills. If the number of tokens in the request is greater than this threshold, microbatching will be used. Otherwise, the request will be processed in a single batch.
Default: 512
2.1.7.17. --enable-eplb, --no-enable-eplb
Enable expert parallelism load balancing for MoE layers.
Default: False
2.1.7.18. --eplb-config
Expert parallelism configuration.
Should either be a valid JSON string or JSON keys passed individually.
Default: EPLBConfig(window_size=1000, step_interval=3000, num_redundant_experts=0, log_balancedness=False)
2.1.7.19. --expert-placement-strategy
Possible choices: linear, round_robin
The expert placement strategy for MoE layers:
- "linear": Experts are placed in a contiguous manner. For example, with 4 experts and 2 ranks, rank 0 will have experts [0, 1] and rank 1 will have experts [2, 3].
- "round_robin": Experts are placed in a round-robin manner. For example, with 4 experts and 2 ranks, rank 0 will have experts [0, 2] and rank 1 will have experts [1, 3]. This strategy can help improve load balancing for grouped expert models with no redundant experts.
Default: linear
2.1.7.20. --num-redundant-experts
This argument is deprecated.
--num-redundant-experts will be removed in v0.12.0.
Default: None
2.1.7.21. --eplb-window-size
This argument is deprecated.
--eplb-window-size will be removed in v0.12.0.
Default: None
2.1.7.22. --eplb-step-interval
This argument is deprecated.
--eplb-step-interval will be removed in v0.12.0.
Default: None
2.1.7.23. --eplb-log-balancedness, --no-eplb-log-balancedness
This argument is deprecated.
--eplb-log-balancedness will be removed in v0.12.0.
Default: None
2.1.7.24. --max-parallel-loading-workers
Maximum number of parallel loading workers when loading model sequentially in multiple batches. To avoid RAM OOM when using tensor parallel and large models.
Default: None
2.1.7.25. --ray-workers-use-nsight, --no-ray-workers-use-nsight
Whether to profile Ray workers with nsight, see https://docs.ray.io/en/latest/ray-observability/user-guides/profiling.html#profiling-nsight-profiler.
Default: False
2.1.7.26. --disable-custom-all-reduce, --no-disable-custom-all-reduce
Disable the custom all-reduce kernel and fall back to NCCL.
Default: False
2.1.7.27. --worker-cls
The full name of the worker class to use. If "auto", the worker class will be determined based on the platform.
Default: auto
2.1.7.28. --worker-extension-cls
The full name of the worker extension class to use. The worker extension class is dynamically inherited by the worker class. This is used to inject new attributes and methods to the worker class for use in collective_rpc calls.
Default: ``
2.1.7.29. --enable-multimodal-encoder-data-parallel
Default: False
2.1.8. CacheConfig
Configuration for the KV cache.
2.1.8.1. --block-size
Possible choices: 1, 8, 16, 32, 64, 128
Size of a contiguous cache block in number of tokens. On CUDA devices, only block sizes up to 32 are supported.
This config has no static default. If left unspecified by the user, it will be set in Platform.check_and_update_config() based on the current platform.
Default: None
2.1.8.2. --gpu-memory-utilization
The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9. This is a per-instance limit, and only applies to the current vLLM instance. It does not matter if you have another vLLM instance running on the same GPU. For example, if you have two vLLM instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.
Default: 0.9
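For example, two instances of an illustrative model can share a single GPU by limiting each to roughly half of the GPU memory and giving them distinct ports:
$ vllm serve Qwen/Qwen3-0.6B --port 8000 --gpu-memory-utilization 0.5
$ vllm serve Qwen/Qwen3-0.6B --port 8001 --gpu-memory-utilization 0.5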
2.1.8.3. --kv-cache-memory-bytes
Size of the KV cache per GPU in bytes. By default, this is set to None and vLLM automatically infers the KV cache size based on gpu_memory_utilization. However, users may want to manually specify the KV cache memory size. kv_cache_memory_bytes allows more fine-grained control of how much memory gets used compared with gpu_memory_utilization. Note that kv_cache_memory_bytes (when not None) ignores gpu_memory_utilization.
Supports human-readable values with k/m/g/K/M/G suffixes, including decimal values. Examples:
- '1k' -> 1,000
- '1K' -> 1,024
- '25.6k' -> 25,600
2.1.8.4. --swap-space
Size of the CPU swap space per GPU (in GiB).
Default: 4
2.1.8.5. --kv-cache-dtype
Possible choices: auto, bfloat16, fp8, fp8_e4m3, fp8_e5m2, fp8_inc
Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3). Intel Gaudi (HPU) supports fp8 (using fp8_inc). Some models (namely DeepSeekV3.2) default to fp8, set to bfloat16 to use bfloat16 instead, this is an invalid option for models that do not default to fp8.
Default: auto
2.1.8.6. --num-gpu-blocks-override
Number of GPU blocks to use. This overrides the profiled num_gpu_blocks if specified. Does nothing if None. Used for testing preemption.
Default: None
2.1.8.7. --enable-prefix-caching, --no-enable-prefix-caching
Whether to enable prefix caching. Enabled by default for V1.
Default: None
2.1.8.8. --prefix-caching-hash-algo
Possible choices: sha256, sha256_cbor
Set the hash algorithm for prefix caching:
- "sha256" uses Pickle for object serialization before hashing.
- "sha256_cbor" provides a reproducible, cross-language compatible hash. It serializes objects using canonical CBOR and hashes them with SHA-256.
Default: sha256
2.1.8.9. --cpu-offload-gb
The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
Default: 0
2.1.8.10. --calculate-kv-scales, --no-calculate-kv-scales
This enables dynamic calculation of k_scale and v_scale when kv_cache_dtype is fp8. If False, the scales will be loaded from the model checkpoint if available. Otherwise, the scales will default to 1.0.
Default: False
2.1.8.11. --kv-sharing-fast-prefill, --no-kv-sharing-fast-prefill
This feature is work in progress and no prefill optimization takes place with this flag enabled currently.
In some KV sharing setups, e.g. YOCO (https://arxiv.org/abs/2405.05254), some layers can skip tokens corresponding to prefill. This flag enables attention metadata for eligible layers to be overridden with the metadata necessary for implementing this optimization in some models (e.g. Gemma3n).
Default: False
2.1.8.12. --mamba-cache-dtype
Possible choices: auto, float32
The data type to use for the Mamba cache (both the conv as well as the ssm state). If set to 'auto', the data type will be inferred from the model config.
Default: auto
2.1.8.13. --mamba-ssm-cache-dtype
Possible choices: auto, float32
The data type to use for the Mamba cache (ssm state only, conv state will still be controlled by mamba_cache_dtype). If set to 'auto', the data type for the ssm state will be determined by mamba_cache_dtype.
Default: auto
2.1.9. MultiModalConfig
Controls the behavior of multimodal models.
2.1.9.1. --limit-mm-per-prompt
The maximum number of input items allowed per prompt for each modality. Defaults to 1 (V0) or 999 (V1) for each modality.
For example, to allow up to 16 images and 2 videos per prompt: {"image": 16, "video": 2}
Should either be a valid JSON string or JSON keys passed individually.
Default: {}
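Following the JSON CLI argument conventions from Section 2.1.1, the following two forms are equivalent (the model name is illustrative and must be a multimodal model; the limits are taken from the example above):
$ vllm serve Qwen/Qwen2.5-VL-7B-Instruct --limit-mm-per-prompt '{"image": 16, "video": 2}'
$ vllm serve Qwen/Qwen2.5-VL-7B-Instruct --limit-mm-per-prompt.image 16 --limit-mm-per-prompt.video 2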
2.1.9.2. --media-io-kwargs
Additional args passed to process media inputs, keyed by modalities. For example, to set num_frames for video, set --media-io-kwargs '{"video": {"num_frames": 40} }'.
Should either be a valid JSON string or JSON keys passed individually.
Default: {}
2.1.9.3. --mm-processor-kwargs
Arguments to be forwarded to the model's processor for multi-modal data, e.g., image processor. Overrides for the multi-modal processor obtained from transformers.AutoProcessor.from_pretrained.
The available overrides depend on the model that is being run.
For example, for Phi-3-Vision: {"num_crops": 4}.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.9.4. --mm-processor-cache-gb
The size (in GiB) of the multi-modal processor cache, which is used to avoid re-processing past multi-modal inputs.
This cache is duplicated for each API process and engine core process, resulting in a total memory usage of mm_processor_cache_gb * (api_server_count + data_parallel_size).
Set to 0 to disable this cache completely (not recommended).
Default: 4
2.1.9.5. --disable-mm-preprocessor-cache
Default: False
2.1.9.6. --mm-processor-cache-type
Possible choices: lru, shm
Type of cache to use for the multi-modal preprocessor/mapper. If shm, use a shared memory FIFO cache. If lru, use a mirrored LRU cache.
Default: lru
2.1.9.7. --mm-shm-cache-max-object-size-mb
Size limit (in MiB) for each object stored in the multi-modal processor shared memory cache. Only effective when mm_processor_cache_type is "shm".
Default: 128
2.1.9.8. --mm-encoder-tp-mode
Possible choices: data, weights
Indicates how to optimize multi-modal encoder inference using tensor parallelism (TP).
- "weights": Within the same vLLM engine, split the weights of each layer across TP ranks. (default TP behavior)
- "data": Within the same vLLM engine, split the batched input data across TP ranks to process the data in parallel, while hosting the full weights on each TP rank. This batch-level DP is not to be confused with API request-level DP (which is controlled by --data-parallel-size). This is only supported on a per-model basis and falls back to "weights" if the encoder does not support DP.
Default: weights
2.1.9.9. --interleave-mm-strings, --no-interleave-mm-strings
Enable fully interleaved support for multimodal prompts, while using --chat-template-content-format=string.
Default: False
2.1.9.10. --skip-mm-profiling, --no-skip-mm-profiling
When enabled, skips multimodal memory profiling and only profiles with language backbone model during engine initialization.
This reduces engine startup time but shifts the responsibility to users for estimating the peak memory usage of the activation of multimodal encoder and embedding cache.
Default: False
2.1.9.11. --video-pruning-rate
Sets the pruning rate for video pruning via Efficient Video Sampling. The value lies in the range [0, 1) and determines the fraction of media tokens from each video to be pruned.
Default: None
2.1.10. LoRAConfig
Configuration for LoRA.
2.1.10.1. --enable-lora, --no-enable-lora
If True, enable handling of LoRA adapters.
Default: None
2.1.10.2. --enable-lora-bias, --no-enable-lora-bias
This argument is deprecated.
Enable bias for LoRA adapters. This option will be removed in v0.12.0.
Default: False
2.1.10.3. --max-loras
Max number of LoRAs in a single batch.
Default: 1
2.1.10.4. --max-lora-rank
Max LoRA rank.
Default: 16
2.1.10.5. --lora-extra-vocab-size
(Deprecated) Maximum size of extra vocabulary that can be present in a LoRA adapter. Will be removed in v0.12.0.
Default: 256
2.1.10.6. --lora-dtype
Possible choices: auto, bfloat16, float16
Data type for LoRA. If auto, will default to base model dtype.
Default: auto
2.1.10.7. --max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be >= max_loras.
Default: None
2.1.10.8. --fully-sharded-loras, --no-fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
Default: False
2.1.10.9. --default-mm-loras
Dictionary mapping specific modalities to LoRA model paths; this field is only applicable to multimodal models and should be leveraged when a model always expects a LoRA to be active when a given modality is present. Note that currently, if a request provides multiple additional modalities, each of which has its own LoRA, we do NOT apply default_mm_loras because we currently only support one LoRA adapter per prompt. When run in offline mode, the LoRA IDs for n modalities will be automatically assigned to 1-n with the names of the modalities in alphabetical order.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.11. ObservabilityConfig
Configuration for observability - metrics and tracing.
2.1.11.2. --otlp-traces-endpoint
Target URL to which OpenTelemetry traces will be sent.
Default: None
2.1.11.3. --collect-detailed-traces
Possible choices: all, model, worker, None, "model,worker", "model,all", "worker,model", "worker,all", "all,model", "all,worker"
It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves the use of possibly costly and/or blocking operations and hence might have a performance impact.
Note that collecting detailed timing information for each request can be expensive.
Default: None
2.1.12. SchedulerConfig
Scheduler configuration.
2.1.12.1. --max-num-batched-tokens
Maximum number of tokens to be processed in a single iteration.
This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.
Supports human-readable values with k/m/g/K/M/G suffixes, including decimal values. Examples:
- '1k' -> 1,000
- '1K' -> 1,024
- '25.6k' -> 25,600
Default: None
2.1.12.2. --max-num-seqs
Maximum number of sequences to be processed in a single iteration.
This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.
Default: None
2.1.12.3. --max-num-partial-prefills
For chunked prefill, the maximum number of sequences that can be partially prefilled concurrently.
Default: 1
2.1.12.4. --max-long-partial-prefills
For chunked prefill, the maximum number of prompts longer than long_prefill_token_threshold that will be prefilled concurrently. Setting this less than max_num_partial_prefills will allow shorter prompts to jump the queue in front of longer prompts in some cases, improving latency.
Default: 1
2.1.12.5. --cuda-graph-sizes
CUDA graph capture sizes:
- If no value is provided, the default is set to [min(max_num_seqs * 2, 512)].
- If one value is provided, the capture list follows the pattern: [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)].
- If more than one value is provided (e.g. 1 2 128), the capture list follows the provided list.
Default: []
2.1.12.6. --long-prefill-token-threshold
For chunked prefill, a request is considered long if the prompt is longer than this number of tokens.
Default: 0
2.1.12.7. --num-lookahead-slots
The number of slots to allocate per sequence per step, beyond the known token ids. This is used in speculative decoding to store KV activations of tokens which may or may not be accepted.
This will be replaced by speculative config in the future; it is present to enable correctness tests until then.
Default: 0
2.1.12.8. --scheduling-policy
Possible choices: fcfs, priority
The scheduling policy to use:
- "fcfs" means first come first served, i.e. requests are handled in order of arrival.
- "priority" means requests are handled based on given priority (lower value means earlier handling), with time of arrival deciding any ties.
Default: fcfs
2.1.12.9. --enable-chunked-prefill, --no-enable-chunked-prefill
If True, prefill requests can be chunked based on the remaining max_num_batched_tokens.
Default: None
2.1.12.10. --disable-chunked-mm-input, --no-disable-chunked-mm-input
If set to true and chunked prefill is enabled, we do not want to partially schedule a multimodal item. Only used in V1. This ensures that if a request has a mixed prompt (like text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (like TTTTIIIII, leaving IIIII), it will be scheduled as TTTT in one step and IIIIIIIIII in the next.
Default: False
2.1.12.11. --scheduler-cls
The scheduler class to use. "vllm.core.scheduler.Scheduler" is the default scheduler. Can be a class directly or the path to a class of form "mod.custom_class".
Default: vllm.core.scheduler.Scheduler
2.1.12.12. --disable-hybrid-kv-cache-manager, --no-disable-hybrid-kv-cache-manager
If set to True, the KV cache manager will allocate the same size of KV cache for all attention layers even if there are multiple types of attention layers, such as full attention and sliding window attention.
Default: False
2.1.12.13. --async-scheduling, --no-async-scheduling
This is an experimental feature.
If set to True, perform async scheduling. This may help reduce the CPU overheads, leading to better latency and throughput. However, async scheduling is currently not supported with some features such as structured outputs, speculative decoding, and pipeline parallelism.
Default: False
2.1.13. VllmConfig
Dataclass which contains all vllm-related configuration. This simplifies passing around the distinct configurations in the codebase.
2.1.13.1. --speculative-config
Speculative decoding configuration.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.13.2. --kv-transfer-config
The configurations for distributed KV cache transfer.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.13.3. --kv-events-config
The configurations for event publishing.
Should either be a valid JSON string or JSON keys passed individually.
Default: None
2.1.13.4. --compilation-config, -O
torch.compile and cudagraph capture configuration for the model.
As a shorthand, -O<n> can be used to directly specify the compilation level n: -O3 is equivalent to -O.level=3 (same as -O='{"level":3}'). Currently, -O <n> and -O=<n> are supported as well, but this will likely be removed in favor of the clearer -O<n> syntax in the future.
Level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production, and is also the default in V1.
You can specify the full compilation config like so: {"level": 3, "cudagraph_capture_sizes": [1, 2, 4, 8]}
Should either be a valid JSON string or JSON keys passed individually.
Default:
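For example, the following invocations are equivalent ways of requesting compilation level 3 (the model name is illustrative):
$ vllm serve Qwen/Qwen3-0.6B -O3
$ vllm serve Qwen/Qwen3-0.6B --compilation-config '{"level": 3}'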
2.1.13.5. --additional-config
Additional config for specified platform. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. Contents must be hashable.
Default: {}
2.1.13.6. --structured-outputs-config
Structured outputs configuration.
Should either be a valid JSON string or JSON keys passed individually.
Default:
2.2. vllm chat arguments
Generate chat completions with the running API server.
$ vllm chat [options]
- --api-key API_KEY
OpenAI API key. If provided, this API key overrides the API key set in the environment variables.
Default: None
- --model-name MODEL_NAME
The model name used in prompt completion, defaults to the first model in list models API call.
Default: None
- --system-prompt SYSTEM_PROMPT
The system prompt to be added to the chat template, used for models that support system prompts.
Default: None
- --url URL
URL of the running OpenAI-compatible RESTful API server.
Default: http://localhost:8000/v1
- -q MESSAGE, --quick MESSAGE
Send a single prompt as MESSAGE and print the response, then exit.
Default: None
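For example (assuming a compatible server is already running at the default URL; the prompt is illustrative):
$ vllm chat --quick "Write a haiku about inference servers."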
2.3. vllm complete arguments
Generate text completions based on the given prompt with the running API server.
$ vllm complete [options]
- --api-key API_KEY
API key for OpenAI services. If provided, this API key overrides the API key set in the environment variables.
Default: None
- --model-name MODEL_NAME
The model name used in prompt completion, defaults to the first model in list models API call.
Default: None
- --url URL
URL of the running OpenAI-compatible RESTful API server.
Default: http://localhost:8000/v1
- -q PROMPT, --quick PROMPT
Send a single prompt and print the completion output, then exit.
Default: None
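For example (assuming a compatible server is already running at the default URL; the prompt is illustrative):
$ vllm complete --quick "The capital of France is"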
2.4. vllm bench arguments
Run benchmarks for latency, online serving throughput, or offline inference throughput.
$ vllm bench [options]
- bench
Positional arguments:
- latency: Benchmarks the latency of a single batch of requests.
- serve: Benchmarks the online serving throughput.
- throughput: Benchmarks offline inference throughput.
2.5. vllm collect-env arguments
Collect environment information.
$ vllm collect-env
2.6. vllm run-batch arguments
Run batch inference jobs for the specified model.
$ vllm run-batch
- --disable-log-requests
Disable logging requests.
Default: False
- --disable-log-stats
Disable logging statistics.
Default: False
- --enable-metrics
Enables Prometheus metrics.
Default: False
- --enable-prompt-tokens-details
Enables prompt_tokens_details in usage when set to True.
Default: False
- --max-log-len MAX_LOG_LEN
Maximum number of prompt characters or prompt ID numbers printed in the log.
Default: Unlimited
- --output-tmp-dir OUTPUT_TMP_DIR
The directory to store the output file before uploading it to the output URL.
Default: None
- --port PORT
Port number for the Prometheus metrics server. Only needed if enable-metrics is set.
Default: 8000
- --response-role RESPONSE_ROLE
The role name to return if request.add_generation_prompt=True.
Default: assistant
- --url URL
Prometheus metrics server URL. Only required if enable-metrics is set.
Default: 0.0.0.0
- --use-v2-block-manager
DEPRECATED. Block manager v1 has been removed. SelfAttnBlockSpaceManager (block manager v2) is now the default. Setting the --use-v2-block-manager flag to True or False has no effect on vLLM behavior.
Default: True
- -i INPUT_FILE, --input-file INPUT_FILE
The path or URL to a single input file. Supports local file paths and HTTP or HTTPS. If a URL is specified, the file should be available using HTTP GET.
Default: None
- -o OUTPUT_FILE, --output-file OUTPUT_FILE
The path or URL to a single output file. Supports local file paths and HTTP or HTTPS. If a URL is specified, the file should be available using HTTP PUT.
Default: None