
vLLM server arguments


Red Hat AI Inference Server 3.2

Server arguments for running Red Hat AI Inference Server

Red Hat AI Documentation Team

Abstract

Learn how to configure and run Red Hat AI Inference Server.

Preface

Red Hat AI Inference Server provides an OpenAI-compatible API server for inference serving. You can control the behavior of the server with arguments.

Chapter 1. Key vLLM server arguments

There are 4 key arguments that you use to configure AI Inference Server to run on your hardware:

  1. --tensor-parallel-size: distributes your model across your host GPUs.
  2. --gpu-memory-utilization: adjusts accelerator memory utilization for model weights, activations, and KV cache. Measured as a fraction from 0.0 to 1.0 that defaults to 0.9. For example, you can set this value to 0.8 to limit GPU memory consumption by AI Inference Server to 80%. Use the largest value that is stable for your deployment to maximize throughput.
  3. --max-model-len: limits the maximum context length of the model, measured in tokens. Set this to prevent problems with memory if the model’s default context length is too long.
  4. --max-num-batched-tokens: limits the maximum batch size of tokens to process per step, measured in tokens. Increasing this improves throughput but can affect output token latency.

For example, to run the Red Hat AI Inference Server container and serve a model with vLLM, run the following, changing server arguments as required:

$ podman run --rm -it \
--device nvidia.com/gpu=all \
--security-opt=label=disable \
--shm-size=4GB -p 8000:8000 \
--userns=keep-id:uid=1001 \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" \
--env=VLLM_NO_USAGE_STATS=1 \
-v ./rhaiis-cache:/opt/app-root/src/.cache \
registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2 \
--model RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--tensor-parallel-size 2 \
--gpu-memory-utilization 0.8 \
--max-model-len 16384 \
--max-num-batched-tokens 2048

Chapter 2. vLLM server usage

$ vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch}
chat
Generate chat completions via the running API server.
complete
Generate text completions based on the given prompt via the running API server.
serve
Start the vLLM OpenAI Compatible API server.
bench
vLLM bench subcommand.
collect-env
Start collecting environment information.
run-batch
Run batch prompts and write results to file.

2.1. vllm serve arguments

vllm serve launches a local server that loads and serves the language model.

2.1.1. JSON CLI arguments

  • --json-arg '{"key1": "value1", "key2": {"key3": "value2"}}'
  • --json-arg.key1 value1 --json-arg.key2.key3 value2

Additionally, list elements can be passed individually using +:

  • --json-arg '{"key4": ["value3", "value4", "value5"]}'
  • --json-arg.key4+ value3 --json-arg.key4+='value4,value5'
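For example, assuming a multimodal model (shown here as a placeholder), the following two invocations pass the same --limit-mm-per-prompt configuration described later in this guide:

$ vllm serve <multimodal-model> --limit-mm-per-prompt '{"image": 16, "video": 2}'

$ vllm serve <multimodal-model> --limit-mm-per-prompt.image 16 --limit-mm-per-prompt.video 2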

2.1.2. Options

2.1.2.1. --headless

Run in headless mode. See multi-node data parallel documentation for more details.

Default: False

2.1.2.2. --api-server-count, -asc

How many API server processes to run.

Default: 1

2.1.2.3. --config

Read CLI options from a config file. Must be a YAML file containing the options described at https://docs.vllm.ai/en/latest/configuration/serve_args.html

Default: None
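For example, a minimal config file, saved here as config.yaml (the file name and values are illustrative; any vllm serve option can be used as a key):

port: 8000
tensor-parallel-size: 2
gpu-memory-utilization: 0.8
max-model-len: 16384

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --config config.yaml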

2.1.2.4. --disable-log-stats

Disable logging statistics.

Default: False

2.1.2.5. --enable-prompt-adapter
Important

This argument is deprecated.

Prompt adapter has been removed. Setting this flag to True or False has no effect on vLLM behavior.

Default: False

2.1.2.6. --enable-log-requests, --no-enable-log-requests

Enable logging requests.

Default: False

2.1.2.7. --disable-log-requests, --no-disable-log-requests
Important

This argument is deprecated.

Disable logging requests.

Default: True

2.1.3. Frontend

Arguments for the OpenAI-compatible frontend server.

2.1.3.1. --host

Host name.

Default: None

2.1.3.2. --port

Port number.

Default: 8000

2.1.3.3. --uds

Unix domain socket path. If set, host and port arguments are ignored.

Default: None

2.1.3.4. --uvicorn-log-level

Possible choices: critical, debug, error, info, trace, warning

Log level for uvicorn.

Default: info

2.1.3.5. --disable-uvicorn-access-log, --no-disable-uvicorn-access-log

Disable uvicorn access log.

Default: False

2.1.3.6. --allow-credentials, --no-allow-credentials

Allow credentials.

Default: False

2.1.3.7. --allowed-origins

Allowed origins.

Default: ['*']

2.1.3.8. --allowed-methods

Allowed methods.

Default: ['*']

2.1.3.9. --allowed-headers

Allowed headers.

Default: ['*']

2.1.3.10. --api-key

If provided, the server will require one of these keys to be presented in the header.

Default: None
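For example, if the server is started with an API key (the key value below is illustrative), clients must send it as a bearer token:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --api-key my-secret-key

$ curl http://localhost:8000/v1/models -H "Authorization: Bearer my-secret-key"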

2.1.3.11. --lora-modules

LoRA modules configurations in either 'name=path' format or JSON format or JSON list format. Example (old format): 'name=path' Example (new format): {"name": "name", "path": "lora_path", "base_model_name": "id"}

Default: None
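For example, assuming a LoRA adapter stored at an illustrative local path, both of the following register it under the name sql-lora (LoRA serving also requires --enable-lora, described in the LoRAConfig section):

$ vllm serve <base-model> --enable-lora --lora-modules sql-lora=/path/to/sql-lora

$ vllm serve <base-model> --enable-lora --lora-modules '{"name": "sql-lora", "path": "/path/to/sql-lora", "base_model_name": "<base-model>"}'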

2.1.3.12. --chat-template

The file path to the chat template, or the template in single-line form for the specified model.

Default: None

2.1.3.13. --chat-template-content-format

Possible choices: auto, openai, string

The format to render message content within a chat template.

  • "string" will render the content as a string. Example: "Hello World"
  • "openai" will render the content as a list of dictionaries, similar to OpenAI schema. Example: [{"type": "text", "text": "Hello world!"}]

Default: auto

2.1.3.14. --response-role

The role name to return if request.add_generation_prompt=true.

Default: assistant

2.1.3.15. --ssl-keyfile

The file path to the SSL key file.

Default: None

2.1.3.16. --ssl-certfile

The file path to the SSL cert file.

Default: None

2.1.3.17. --ssl-ca-certs

The CA certificates file.

Default: None

2.1.3.18. --enable-ssl-refresh, --no-enable-ssl-refresh

Refresh SSL Context when SSL certificate files change

Default: False
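For example, a sketch of serving over HTTPS with certificate hot-reloading; the certificate and key paths are illustrative:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --ssl-keyfile /path/to/server.key --ssl-certfile /path/to/server.crt --enable-ssl-refresh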

2.1.3.19. --ssl-cert-reqs

Whether a client certificate is required (see the Python stdlib ssl module documentation).

Default: 0

2.1.3.20. --root-path

FastAPI root_path when app is behind a path based routing proxy.

Default: None

2.1.3.21. --middleware

Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware().

Default: []

2.1.3.22. --return-tokens-as-token-ids, --no-return-tokens-as-token-ids

When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified.

Default: False

2.1.3.23. --disable-frontend-multiprocessing, --no-disable-frontend-multiprocessing

If specified, will run the OpenAI frontend server in the same process as the model serving engine.

Default: False

2.1.3.24. --enable-request-id-headers, --no-enable-request-id-headers

If specified, API server will add X-Request-Id header to responses. Caution: this hurts performance at high QPS.

Default: False

2.1.3.25. --enable-auto-tool-choice, --no-enable-auto-tool-choice

Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use.

Default: False

2.1.3.26. --exclude-tools-when-tool-choice-none, --no-exclude-tools-when-tool-choice-none

If specified, exclude tool definitions in prompts when tool_choice='none'.

Default: False

2.1.3.27. --tool-call-parser

Select the tool call parser depending on the model that you’re using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice. You can choose any option from the built-in parsers or register a plugin via --tool-parser-plugin.

Default: None
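For example, a sketch of enabling automatic tool calling for a model that emits Hermes-style tool calls; the parser name depends on your model, so check the built-in parser list for your version:

$ vllm serve <model> --enable-auto-tool-choice --tool-call-parser hermes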

2.1.3.28. --tool-parser-plugin

Specify the tool parser plugin used to parse model-generated tool calls into OpenAI API format. The parser names registered in this plugin can then be used in --tool-call-parser.

Default: ``

2.1.3.29. --tool-server

Comma-separated list of host:port pairs (IPv4, IPv6, or hostname). Examples: 127.0.0.1:8000, [::1]:8000, localhost:1234. Or demo for demo purpose.

Default: None

2.1.3.30. --log-config-file

Path to logging config JSON file for both vllm and uvicorn

Default: None

2.1.3.31. --max-log-len

Max number of prompt characters or prompt ID numbers being printed in log. The default of None means unlimited.

Default: None

2.1.3.32. --disable-fastapi-docs, --no-disable-fastapi-docs

Disable FastAPI’s OpenAPI schema, Swagger UI, and ReDoc endpoint.

Default: False

2.1.3.33. --enable-prompt-tokens-details, --no-enable-prompt-tokens-details

If set to True, enable prompt_tokens_details in usage.

Default: False

2.1.3.34. --enable-server-load-tracking, --no-enable-server-load-tracking

If set to True, enable tracking server_load_metrics in the app state.

Default: False

2.1.3.35. --enable-force-include-usage, --no-enable-force-include-usage

If set to True, include usage on every request.

Default: False

2.1.3.36. --enable-tokenizer-info-endpoint, --no-enable-tokenizer-info-endpoint

Enable the /get_tokenizer_info endpoint. May expose chat templates and other tokenizer configuration.

Default: False

2.1.3.37. --enable-log-outputs, --no-enable-log-outputs

If set to True, enable logging of model outputs (generations) in addition to the input logging that is enabled by default.

Default: False

2.1.3.38. --h11-max-incomplete-event-size

Maximum size (bytes) of an incomplete HTTP event (header or body) for h11 parser. Helps mitigate header abuse. Default: 4194304 (4 MB).

Default: 4194304

2.1.3.39. --h11-max-header-count

Maximum number of HTTP headers allowed in a request for h11 parser. Helps mitigate header abuse. Default: 256.

Default: 256

2.1.4. ModelConfig

Configuration for the model.

2.1.4.1. --model

Name or path of the Hugging Face model to use. It is also used as the content for model_name tag in metrics output when served_model_name is not specified.

Default: Qwen/Qwen3-0.6B

2.1.4.2. --runner

Possible choices: auto, draft, generate, pooling

The type of model runner to use. Each vLLM instance only supports one model runner, even if the same model can be used for multiple types.

Default: auto

2.1.4.3. --convert

Possible choices: auto, classify, embed, none, reward

Convert the model using adapters defined in vllm.model_executor.models.adapters. The most common use case is to adapt a text generation model to be used for pooling tasks.

Default: auto

2.1.4.4. --task

Possible choices: auto, classify, draft, embed, embedding, generate, reward, score, transcription, None

Important

This argument is deprecated.

The task to use the model for. If the model supports more than one model runner, this is used to select which model runner to run.

Note that the model may support other tasks using the same model runner.

Default: None

2.1.4.5. --tokenizer

Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path will be used.

Default: None

2.1.4.6. --tokenizer-mode

Possible choices: auto, custom, mistral, slow

Tokenizer mode:

  • "auto" will use the fast tokenizer if available.
  • "slow" will always use the slow tokenizer.
  • "mistral" will always use the tokenizer from mistral_common.
  • "custom" will use --tokenizer to select the preregistered tokenizer.

Default: auto

2.1.4.7. --trust-remote-code, --no-trust-remote-code

Trust remote code (e.g., from HuggingFace) when downloading the model and tokenizer.

Default: False

2.1.4.8. --dtype

Possible choices: auto, bfloat16, float, float16, float32, half

Data type for model weights and activations:

  • "auto" will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
  • "half" for FP16. Recommended for AWQ quantization.
  • "float16" is the same as "half".
  • "bfloat16" for a balance between precision and range.
  • "float" is shorthand for FP32 precision.
  • "float32" for FP32 precision.

Default: auto

2.1.4.9. --seed

Random seed for reproducibility. Initialized to None in V0, but initialized to 0 in V1.

Default: None

2.1.4.10. --hf-config-path

Name or path of the Hugging Face config to use. If unspecified, model name or path will be used.

Default: None

2.1.4.11. --allowed-local-media-path

Allows API requests to read local images or videos from directories specified on the server file system. This is a security risk and should only be enabled in trusted environments.

Default: ``

2.1.4.12. --revision

The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

Default: None

2.1.4.13. --code-revision

The specific revision to use for the model code on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

Default: None

2.1.4.14. --rope-scaling

RoPE scaling configuration. For example, {"rope_type":"dynamic","factor":2.0}.

Should either be a valid JSON string or JSON keys passed individually.

Default: {}
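For example, a sketch of applying dynamic RoPE scaling with a factor of 2.0 while raising the context length; whether this is appropriate depends on the model:

$ vllm serve <model> --rope-scaling '{"rope_type":"dynamic","factor":2.0}' --max-model-len 16384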

2.1.4.15. --rope-theta

RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.

Default: None

2.1.4.16. --tokenizer-revision

The specific revision to use for the tokenizer on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

Default: None

2.1.4.17. --max-model-len

Model context length (prompt and output). If unspecified, will be automatically derived from the model config.

When passing via --max-model-len, supports k/m/g/K/M/G in human-readable format. Examples:

  • 1k -> 1000
  • 1K -> 1024
  • 25.6k -> 25,600

Default: None

2.1.4.18. --quantization, -q

Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.

Default: None

2.1.4.19. --enforce-eager, --no-enforce-eager

Whether to always use eager-mode PyTorch. If True, we will disable CUDA graph and always execute the model in eager mode. If False, we will use CUDA graph and eager execution in hybrid for maximal performance and flexibility.

Default: False

2.1.4.20. --max-seq-len-to-capture

Maximum sequence len covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, we fall back to the eager mode.

Default: 8192

2.1.4.21. --max-logprobs

Maximum number of log probabilities to return when logprobs is specified in SamplingParams. The default value comes from the default for the OpenAI Chat Completions API. -1 means no cap, that is, all (output_length * vocab_size) logprobs are allowed to be returned, which may cause OOM.

Default: 20

2.1.4.22. --logprobs-mode

Possible choices: processed_logits, processed_logprobs, raw_logits, raw_logprobs

Indicates the content returned in the logprobs and prompt_logprobs. Supported mode: 1) raw_logprobs, 2) processed_logprobs, 3) raw_logits, 4) processed_logits. Raw means the values before applying logit processors, like bad words. Processed means the values after applying such processors.

Default: raw_logprobs

2.1.4.23. --disable-sliding-window, --no-disable-sliding-window

Whether to disable sliding window. If True, we will disable the sliding window functionality of the model, capping to sliding window size. If the model does not support sliding window, this argument is ignored.

Default: False

2.1.4.24. --disable-cascade-attn, --no-disable-cascade-attn

Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it can be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention is only used when the heuristic determines that it is beneficial.

Default: False

2.1.4.25. --skip-tokenizer-init, --no-skip-tokenizer-init

Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids.

Default: False

2.1.4.26. --enable-prompt-embeds, --no-enable-prompt-embeds

If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation.

Default: False

2.1.4.27. --served-model-name

The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that these names are also used in the model_name tag content of Prometheus metrics; if multiple names are provided, the metrics tag takes the first one.

Default: None

2.1.4.28. --disable-async-output-proc

Disable async output processing. This may result in lower performance.

Default: False

2.1.4.29. --config-format

Possible choices: auto, hf, mistral

The format of the model config to load:

  • "auto" will try to load the config in hf format if available else it will try to load in mistral format.
  • "hf" will load the config in hf format.
  • "mistral" will load the config in mistral format.

Default: auto

2.1.4.30. --hf-token

The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

Default: None

2.1.4.31. --hf-overrides

If a dictionary, contains arguments to be forwarded to the Hugging Face config. If a callable, it is called to update the HuggingFace config.

Default: {}

2.1.4.32. --override-neuron-config

Initialize a non-default Neuron config or override the default Neuron config that is specific to Neuron devices. This argument is used to configure Neuron config options that cannot be gathered from the vLLM arguments, for example {"cast_logits_dtype": "bfloat16"}.

Should either be a valid JSON string or JSON keys passed individually.

Default: {}

2.1.4.33. --override-pooler-config

Initialize non-default pooling config or override default pooling config for the pooling model. e.g. {"pooling_type": "mean", "normalize": false}.

Default: None

2.1.4.34. --logits-processor-pattern

Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.

Default: None

2.1.4.35. --generation-config

The folder path to the generation config. Defaults to "auto", in which case the generation config is loaded from the model path. If set to "vllm", no generation config is loaded and vLLM defaults are used. If set to a folder path, the generation config is loaded from the specified folder path. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.

Default: auto

2.1.4.36. --override-generation-config

Overrides or sets generation config. e.g. {"temperature": 0.5}. If used with --generation-config auto, the override parameters will be merged with the default config from the model. If used with --generation-config vllm, only the override parameters are used.

Should either be a valid JSON string or JSON keys passed individually.

Default: {}

2.1.4.37. --enable-sleep-mode, --no-enable-sleep-mode

Enable sleep mode for the engine (only cuda platform is supported).

Default: False

2.1.4.38. --model-impl

Possible choices: auto, vllm, transformers

Which implementation of the model to use:

  • "auto" will try to use the vLLM implementation, if it exists, and fall back to the Transformers implementation if no vLLM implementation is available.
  • "vllm" will use the vLLM model implementation.
  • "transformers" will use the Transformers model implementation.

Default: auto

2.1.4.39. --override-attention-dtype

Override dtype for attention

Default: None

2.1.4.40. --logits-processors

One or more logits processors' fully-qualified class names or class definitions

Default: None

2.1.5. LoadConfig

Configuration for loading the model weights.

2.1.5.1. --load-format

The format of the model weights to load:

  • "auto" will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available.
  • "pt" will load the weights in the pytorch bin format.
  • "safetensors" will load the weights in the safetensors format.
  • "npcache" will load the weights in pytorch format and store a numpy cache to speed up the loading.
  • "dummy" will initialize the weights with random values, which is mainly for profiling.
  • "tensorizer" will use CoreWeave’s tensorizer library for fast weight loading. See the Tensorize vLLM Model script in the Examples section for more information.
  • "runai_streamer" will load the Safetensors weights using Run:ai Model Streamer.
  • "bitsandbytes" will load the weights using bitsandbytes quantization.
  • "sharded_state" will load weights from pre-sharded checkpoint files, supporting efficient loading of tensor-parallel models.
  • "gguf" will load weights from GGUF format files (details specified in https://github.com/ggml-org/ggml/blob/master/docs/gguf.md).
  • "mistral" will load weights from consolidated safetensors files used by Mistral models.
  • Other custom values can be supported via plugins.

Default: auto

2.1.5.2. --download-dir

Directory to download and load the weights; defaults to the default cache directory of Hugging Face.

Default: None

2.1.5.3. --model-loader-extra-config

Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format.

Default: {}

2.1.5.4. --ignore-patterns

The list of patterns to ignore when loading the model. Defaults to "original/*/" to avoid repeated loading of Llama's checkpoints.

Default: None

2.1.5.5. --use-tqdm-on-load, --no-use-tqdm-on-load

Whether to enable tqdm for showing progress bar when loading model weights.

Default: True

2.1.5.6. --pt-load-map-location

The map location for loading the PyTorch checkpoint, to support loading checkpoints that can only be loaded on certain devices such as "cuda"; this is equivalent to {"": "cuda"}. Another supported format is a mapping between devices, for example from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from the command line, the strings in the dictionary need to be double quoted for JSON parsing. For more details, see the original documentation for map_location at https://pytorch.org/docs/stable/generated/torch.load.html

Default: cpu

2.1.6. DecodingConfig

Dataclass which contains the decoding strategy of the engine.

2.1.6.1. --guided-decoding-backend

Possible choices: auto, guidance, outlines, xgrammar

Which engine will be used for guided decoding (JSON schema / regex etc) by default. With "auto", we will make opinionated choices based on request contents and what the backend libraries currently support, so the behavior is subject to change in each release.

Default: auto

2.1.6.2. --guided-decoding-disable-fallback, --no-guided-decoding-disable-fallback

If True, vLLM will not fall back to a different backend on error.

Default: False

2.1.6.3. --guided-decoding-disable-any-whitespace, --no-guided-decoding-disable-any-whitespace

If True, the model will not generate any whitespace during guided decoding. This is only supported for the xgrammar and guidance backends.

Default: False

2.1.6.4. --guided-decoding-disable-additional-properties, --no-guided-decoding-disable-additional-properties

If True, the guidance backend will not use additionalProperties in the JSON schema. This is only supported for the guidance backend and is used to better align its behaviour with outlines and xgrammar.

Default: False

2.1.6.5. --reasoning-parser

Possible choices: deepseek_r1, glm45, GptOss, granite, hunyuan_a13b, mistral, qwen3, step3

Select the reasoning parser depending on the model that you’re using. This is used to parse the reasoning content into OpenAI API format.

Default: ``
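For example, a sketch of serving a DeepSeek-R1-style reasoning model with its matching parser; the model name is illustrative and the parser must match your model:

$ vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --reasoning-parser deepseek_r1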

2.1.7. ParallelConfig

Configuration for the distributed execution.

2.1.7.1. --distributed-executor-backend

Possible choices: external_launcher, mp, ray, uni, None

Backend to use for distributed model workers, either "ray" or "mp" (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, "mp" will be used to keep processing on a single host. Otherwise, this will default to "ray" if Ray is installed, and fail otherwise. Note that TPU only supports Ray for distributed inference.

Default: None

2.1.7.2. --pipeline-parallel-size, -pp

Number of pipeline parallel groups.

Default: 1

2.1.7.3. --tensor-parallel-size, -tp

Number of tensor parallel groups.

Default: 1

2.1.7.4. --data-parallel-size, -dp

Number of data parallel groups. MoE layers will be sharded according to the product of the tensor parallel size and data parallel size.

Default: 1

2.1.7.5. --data-parallel-rank, -dpn

Data parallel rank of this instance. When set, enables external load balancer mode.

Default: None

2.1.7.6. --data-parallel-start-rank, -dpr

Starting data parallel rank for secondary nodes.

Default: None

2.1.7.7. --data-parallel-size-local, -dpl

Number of data parallel replicas to run on this node.

Default: None

2.1.7.8. --data-parallel-address, -dpa

Address of data parallel cluster head-node.

Default: None

2.1.7.9. --data-parallel-rpc-port, -dpp

Port for data parallel RPC communication.

Default: None

2.1.7.10. --data-parallel-backend, -dpb

Backend for data parallel, either "mp" or "ray".

Default: mp

2.1.7.11. --data-parallel-hybrid-lb, --no-data-parallel-hybrid-lb

Whether to use "hybrid" DP LB mode. Applies only to online serving and when data_parallel_size > 0. Enables running an AsyncLLM and API server on a "per-node" basis where vLLM load balances between local data parallel ranks, but an external LB balances between vLLM nodes/replicas. Set explicitly in conjunction with --data-parallel-start-rank.

Default: False

2.1.7.12. --enable-expert-parallel, --no-enable-expert-parallel

Use expert parallelism instead of tensor parallelism for MoE layers.

Default: False

2.1.7.13. --enable-eplb, --no-enable-eplb

Enable expert parallelism load balancing for MoE layers.

Default: False

2.1.7.14. --num-redundant-experts

Number of redundant experts to use for expert parallelism.

Default: 0

2.1.7.15. --eplb-window-size

Window size for expert load recording.

Default: 1000

2.1.7.16. --eplb-step-interval

Interval for rearranging experts in expert parallelism.

Note that if this is greater than the EPLB window size, only the metrics of the last eplb_window_size steps will be used for rearranging experts.

Default: 3000

2.1.7.17. --eplb-log-balancedness, --no-eplb-log-balancedness

Log the balancedness at each step of expert parallelism. This is turned off by default because it causes communication overhead.

Default: False

2.1.7.18. --max-parallel-loading-workers

Maximum number of parallel loading workers when loading model sequentially in multiple batches. To avoid RAM OOM when using tensor parallel and large models.

Default: None

2.1.7.19. --ray-workers-use-nsight, --no-ray-workers-use-nsight

Whether to profile Ray workers with nsight, see https://docs.ray.io/en/latest/ray-observability/user-guides/profiling.html#profiling-nsight-profiler.

Default: False

2.1.7.20. --disable-custom-all-reduce, --no-disable-custom-all-reduce

Disable the custom all-reduce kernel and fall back to NCCL.

Default: False

2.1.7.21. --worker-cls

The full name of the worker class to use. If "auto", the worker class will be determined based on the platform.

Default: auto

2.1.7.22. --worker-extension-cls

The full name of the worker extension class to use. The worker extension class is dynamically inherited by the worker class. This is used to inject new attributes and methods to the worker class for use in collective_rpc calls.

Default: ``

2.1.7.23. --enable-multimodal-encoder-data-parallel, --no-enable-multimodal-encoder-data-parallel

Use data parallelism instead of tensor parallelism for the vision encoder. Currently only supported for Llama 4.

Default: False

2.1.8. CacheConfig

Configuration for the KV cache.

2.1.8.1. --block-size

Possible choices: 1, 8, 16, 32, 64, 128

Size of a contiguous cache block in number of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.

This config has no static default. If left unspecified by the user, it will be set in Platform.check_and_update_config() based on the current platform.

Default: None

2.1.8.2. --gpu-memory-utilization

The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, the default value of 0.9 is used. This is a per-instance limit and only applies to the current vLLM instance. It does not matter if you have another vLLM instance running on the same GPU. For example, if you have two vLLM instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.

Default: 0.9
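For example, a sketch of running two instances on the same GPU, each limited to half of the GPU memory; the ports and model are illustrative:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --port 8000 --gpu-memory-utilization 0.5

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --port 8001 --gpu-memory-utilization 0.5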

2.1.8.3. --swap-space

Size of the CPU swap space per GPU (in GiB).

Default: 4

2.1.8.4. --kv-cache-dtype

Possible choices: auto, fp8, fp8_e4m3, fp8_e5m2, fp8_inc

Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3). Intel Gaudi (HPU) supports fp8 (using fp8_inc).

Default: auto

2.1.8.5. --num-gpu-blocks-override

Number of GPU blocks to use. This overrides the profiled num_gpu_blocks if specified. Does nothing if None. Used for testing preemption.

Default: None

2.1.8.6. --enable-prefix-caching, --no-enable-prefix-caching

Whether to enable prefix caching. Disabled by default for V0. Enabled by default for V1.

Default: None

2.1.8.7. --prefix-caching-hash-algo

Possible choices: builtin, sha256, sha256_cbor_64bit

Set the hash algorithm for prefix caching:

  • "builtin" is Python’s built-in hash.
  • "sha256" is collision resistant but with certain overheads. This option uses Pickle for object serialization before hashing.
  • "sha256_cbor_64bit" provides a reproducible, cross-language compatible hash. It serializes objects using canonical CBOR and hashes them with SHA-256. The resulting hash consists of the lower 64 bits of the SHA-256 digest.

Default: builtin

2.1.8.8. --cpu-offload-gb

The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.

Default: 0

2.1.8.9. --calculate-kv-scales, --no-calculate-kv-scales

This enables dynamic calculation of k_scale and v_scale when kv_cache_dtype is fp8. If False, the scales will be loaded from the model checkpoint if available. Otherwise, the scales will default to 1.0.

Default: False

2.1.8.10. --kv-sharing-fast-prefill, --no-kv-sharing-fast-prefill

This feature is work in progress and no prefill optimization takes place with this flag enabled currently.

In some KV sharing setups, e.g. YOCO (https://arxiv.org/abs/2405.05254), some layers can skip tokens corresponding to prefill. This flag enables attention metadata for eligible layers to be overridden with the metadata necessary for implementing this optimization in some models (e.g. Gemma3n).

Default: False

2.1.8.11. --mamba-cache-dtype

Possible choices: auto, float32

The data type to use for the Mamba cache (both the conv as well as the ssm state). If set to 'auto', the data type will be inferred from the model config.

Default: auto

2.1.8.12. --mamba-ssm-cache-dtype

Possible choices: auto, float32

The data type to use for the Mamba cache (ssm state only, conv state will still be controlled by mamba_cache_dtype). If set to 'auto', the data type for the ssm state will be determined by mamba_cache_dtype.

Default: auto

2.1.9. MultiModalConfig

Controls the behavior of multimodal models.

2.1.9.1. --limit-mm-per-prompt

The maximum number of input items allowed per prompt for each modality. Defaults to 1 (V0) or 999 (V1) for each modality.

For example, to allow up to 16 images and 2 videos per prompt: {"image": 16, "video": 2}

Should either be a valid JSON string or JSON keys passed individually.

Default: {}

2.1.9.2. --media-io-kwargs

Additional args passed to process media inputs, keyed by modalities. For example, to set num_frames for video, set --media-io-kwargs '{"video": {"num_frames": 40} }'

Should either be a valid JSON string or JSON keys passed individually.

Default: {}

2.1.9.3. --mm-processor-kwargs

Overrides for the multi-modal processor obtained from transformers.AutoProcessor.from_pretrained.

The available overrides depend on the model that is being run.

For example, for Phi-3-Vision: {"num_crops": 4}.

Should either be a valid JSON string or JSON keys passed individually.

Default: None

2.1.9.4. --mm-processor-cache-gb

The size (in GiB) of the multi-modal processor cache, which is used to avoid re-processing past multi-modal inputs.

This cache is duplicated for each API process and engine core process, resulting in a total memory usage of mm_processor_cache_gb * (api_server_count + data_parallel_size).

Set to 0 to disable this cache completely (not recommended).

Default: 4

2.1.9.5. --disable-mm-preprocessor-cache

Default: False

2.1.9.6. --interleave-mm-strings, --no-interleave-mm-strings

Enable fully interleaved support for multimodal prompts.

Default: False

2.1.9.7. --skip-mm-profiling, --no-skip-mm-profiling

When enabled, skips multimodal memory profiling and only profiles with language backbone model during engine initialization.

This reduces engine startup time but shifts the responsibility to users for estimating the peak memory usage of the activation of multimodal encoder and embedding cache.

Default: False

2.1.10. LoRAConfig

Configuration for LoRA.

2.1.10.1. --enable-lora, --no-enable-lora

If True, enable handling of LoRA adapters.

Default: None

2.1.10.2. --enable-lora-bias, --no-enable-lora-bias

Enable bias for LoRA adapters.

Default: False

2.1.10.3. --max-loras

Max number of LoRAs in a single batch.

Default: 1

2.1.10.4. --max-lora-rank

Max LoRA rank.

Default: 16

2.1.10.5. --lora-extra-vocab-size

Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).

Default: 256

2.1.10.6. --lora-dtype

Possible choices: auto, bfloat16, float16

Data type for LoRA. If auto, will default to base model dtype.

Default: auto

2.1.10.7. --max-cpu-loras

Maximum number of LoRAs to store in CPU memory. Must be >= max_loras.

Default: None

2.1.10.8. --fully-sharded-loras, --no-fully-sharded-loras

By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.

Default: False

2.1.10.9. --default-mm-loras

Dictionary mapping specific modalities to LoRA model paths; this field is only applicable to multimodal models and should be leveraged when a model always expects a LoRA to be active when a given modality is present. Note that currently, if a request provides multiple additional modalities, each of which has its own LoRA, we do NOT apply default_mm_loras because we currently only support one LoRA adapter per prompt. When run in offline mode, the LoRA IDs for n modalities will be automatically assigned to 1-n with the names of the modalities in alphabetic order.

Should either be a valid JSON string or JSON keys passed individually.

Default: None

2.1.11. ObservabilityConfig

Configuration for observability - metrics and tracing.

2.1.11.1. --show-hidden-metrics-for-version

Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you use --show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release.

Default: None

2.1.11.2. --otlp-traces-endpoint

Target URL to which OpenTelemetry traces will be sent.

Default: None

2.1.11.3. --collect-detailed-traces

Possible choices: all, model, worker, None, model,worker, model,all, worker,model, worker,all, all,model, all,worker

It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves the use of possibly costly and/or blocking operations and hence might have a performance impact.

Note that collecting detailed timing information for each request can be expensive.

Default: None

2.1.12. SchedulerConfig

Scheduler configuration.

2.1.12.1. --max-num-batched-tokens

Maximum number of tokens to be processed in a single iteration.

This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.

Default: None

2.1.12.2. --max-num-seqs

Maximum number of sequences to be processed in a single iteration.

This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.

Default: None

2.1.12.3. --max-num-partial-prefills

For chunked prefill, the maximum number of sequences that can be partially prefilled concurrently.

Default: 1

2.1.12.4. --max-long-partial-prefills

For chunked prefill, the maximum number of prompts longer than long_prefill_token_threshold that will be prefilled concurrently. Setting this less than max_num_partial_prefills will allow shorter prompts to jump the queue in front of longer prompts in some cases, improving latency.

Default: 1

2.1.12.5. --cuda-graph-sizes

Cuda graph capture sizes

  1. If no value is provided, the default is set to [min(max_num_seqs * 2, 512)].
  2. If one value is provided, the capture list follows the pattern: [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)].
  3. If more than one value is provided (for example, 1 2 128), the capture list follows the provided list.

Default: []

2.1.12.6. --long-prefill-token-threshold

For chunked prefill, a request is considered long if the prompt is longer than this number of tokens.

Default: 0

2.1.12.7. --num-lookahead-slots

The number of slots to allocate per sequence per step, beyond the known token ids. This is used in speculative decoding to store KV activations of tokens which may or may not be accepted.

Note

This will be replaced by speculative config in the future; it is present to enable correctness tests until then.

Default: 0

2.1.12.8. --scheduler-delay-factor

Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.

Default: 0.0

2.1.12.9. --preemption-mode

Possible choices: recompute, swap, None

Whether to perform preemption by swapping or recomputation. If not specified, we determine the mode as follows: We use recomputation by default since it incurs lower overhead than swapping. However, when the sequence group has multiple sequences (e.g., beam search), recomputation is not currently supported. In such a case, we use swapping instead.

Default: None

2.1.12.10. --scheduling-policy

Possible choices: fcfs, priority

The scheduling policy to use:

  • "fcfs" means first come first served, i.e. requests are handled in order of arrival.
  • "priority" means requests are handled based on given priority (lower value means earlier handling) and time of arrival deciding any ties).

Default: fcfs

2.1.12.11. --enable-chunked-prefill, --no-enable-chunked-prefill

If True, prefill requests can be chunked based on the remaining max_num_batched_tokens.

Default: None

2.1.12.12. --disable-chunked-mm-input, --no-disable-chunked-mm-input

If set to true and chunked prefill is enabled, we do not want to partially schedule a multimodal item. This is only used in V1. It ensures that if a request has a mixed prompt (such as text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (like TTTTIIIII, leaving IIIII), it will be scheduled as TTTT in one step and IIIIIIIIII in the next.

Default: False

2.1.12.13. --scheduler-cls

The scheduler class to use. "vllm.core.scheduler.Scheduler" is the default scheduler. Can be a class directly or the path to a class of form "mod.custom_class".

Default: vllm.core.scheduler.Scheduler

2.1.12.14. --disable-hybrid-kv-cache-manager, --no-disable-hybrid-kv-cache-manager

If set to True, the KV cache manager will allocate the same size of KV cache for all attention layers even if there are multiple types of attention layers, such as full attention and sliding window attention.

Default: False

2.1.12.15. --async-scheduling, --no-async-scheduling
Note

This is an experimental feature.

If set to True, perform async scheduling. This may help reduce the CPU overheads, leading to better latency and throughput. However, async scheduling is currently not supported with some features such as structured outputs, speculative decoding, and pipeline parallelism.

Default: False

2.1.13. VllmConfig

Dataclass which contains all vllm-related configuration. This simplifies passing around the distinct configurations in the codebase.

2.1.13.1. --speculative-config

Speculative decoding configuration.

Should either be a valid JSON string or JSON keys passed individually.

Default: None

2.1.13.2. --kv-transfer-config

The configurations for distributed KV cache transfer.

Should either be a valid JSON string or JSON keys passed individually.

Default: None

2.1.13.3. --kv-events-config

The configurations for event publishing.

Should either be a valid JSON string or JSON keys passed individually.

Default: None

2.1.13.4. --compilation-config, -O

torch.compile and cudagraph capture configuration for the model.

As a shorthand, -O<n> can be used to directly specify the compilation level n: -O3 is equivalent to -O.level=3 (same as -O='{"level":3}'). Currently, -O <n> and -O=<n> are supported as well, but this will likely be removed in favor of the clearer -O<n> syntax in the future.

Note

Level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production and is also the default in V1.

You can specify the full compilation config like so: {"level": 3, "cudagraph_capture_sizes": [1, 2, 4, 8]}

Should either be a valid JSON string or JSON keys passed individually.

Default:

{
  "level": null,
  "debug_dump_path": "",
  "cache_dir": "",
  "backend": "",
  "custom_ops": [],
  "splitting_ops": null,
  "use_inductor": true,
  "compile_sizes": null,
  "inductor_compile_config": {
    "enable_auto_functionalized_v2": false
  },
  "inductor_passes": {},
  "cudagraph_mode": null,
  "use_cudagraph": true,
  "cudagraph_num_of_warmups": 0,
  "cudagraph_capture_sizes": null,
  "cudagraph_copy_inputs": false,
  "full_cuda_graph": false,
  "pass_config": {},
  "max_capture_size": null,
  "local_cache_dir": null
}
2.1.13.5. --additional-config

Additional config for specified platform. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. Contents must be hashable.

Default: {}

2.2. vllm chat arguments

Generate chat completions with the running API server.

$ vllm chat [options]
--api-key API_KEY

OpenAI API key. If provided, this API key overrides the API key set in the environment variables.

Default: None

--model-name MODEL_NAME

The model name used in prompt completion; defaults to the first model in the list models API call.

Default: None

--system-prompt SYSTEM_PROMPT

The system prompt to be added to the chat template, used for models that support system prompts.

Default: None

--url URL

URL of the running OpenAI-compatible RESTful API server

Default: http://localhost:8000/v1

-q MESSAGE, --quick MESSAGE

Send a single prompt as MESSAGE and print the response, then exit.

Default: None
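For example, to send a single chat message to a server running at the default URL and exit:

$ vllm chat --quick "Write a short greeting."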

2.3. vllm complete arguments

Generate text completions based on the given prompt with the running API server.

$ vllm complete [options]
--api-key API_KEY

API key for OpenAI services. If provided, this API key overrides the API key set in the environment variables.

Default: None

--model-name MODEL_NAME

The model name used in prompt completion; defaults to the first model in the list models API call.

Default: None

--url URL

URL of the running OpenAI-compatible RESTful API server

Default: http://localhost:8000/v1

-q PROMPT, --quick PROMPT

Send a single prompt and print the completion output, then exit.

Default: None
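For example, to request a single completion from a server running at the default URL and exit:

$ vllm complete --quick "The capital of France is"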

2.4. vllm bench arguments

Run benchmarks for latency, online serving throughput, or offline inference throughput.

$ vllm bench [options]
bench

Positional arguments:

  • latency - Benchmarks the latency of a single batch of requests.
  • serve - Benchmarks the online serving throughput.
  • throughput - Benchmarks offline inference throughput.
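For example, a sketch of an offline throughput benchmark; flag names can vary between versions, so run vllm bench throughput --help to confirm the options available in your release:

$ vllm bench throughput --model RedHatAI/Llama-3.2-1B-Instruct-FP8 --input-len 128 --output-len 128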

2.5. vllm collect-env arguments

Collect environment information.

$ vllm collect-env

2.6. vllm run-batch arguments

Run batch inference jobs for the specified model.

$ vllm run-batch
--disable-log-requests

Disable logging requests.

Default: False

--disable-log-stats

Disable logging statistics.

Default: False

--enable-metrics

Enables Prometheus metrics.

Default: False

--enable-prompt-tokens-details

Enables prompt_tokens_details in usage when set to True.

Default: False

--max-log-len MAX_LOG_LEN

Maximum number of prompt characters or prompt ID numbers printed in the log.

Default: Unlimited

--output-tmp-dir OUTPUT_TMP_DIR

The directory to store the output file before uploading it to the output URL.

Default: None

--port PORT

Port number for the Prometheus metrics server. Only needed if enable-metrics is set.

Default: 8000

--response-role RESPONSE_ROLE

The role name to return if request.add_generation_prompt=True.

Default: assistant

--url URL

Prometheus metrics server URL. Only required if enable-metrics is set.

Default: 0.0.0.0

--use-v2-block-manager

DEPRECATED. Block manager v1 has been removed. SelfAttnBlockSpaceManager (block manager v2) is now the default. Setting --use-v2-block-manager flag to True or False has no effect on vLLM behavior.

Default: True

-i INPUT_FILE, --input-file INPUT_FILE

The path or URL to a single input file. Supports local file paths and HTTP or HTTPS. If a URL is specified, the file should be available using HTTP GET.

Default: None

-o OUTPUT_FILE, --output-file OUTPUT_FILE

The path or URL to a single output file. Supports local file paths and HTTP or HTTPS. If a URL is specified, the file should be available using HTTP PUT.

Default: None
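For example, a sketch of a local batch run; the input and output file names are illustrative:

$ vllm run-batch -i batch_requests.jsonl -o results.jsonl --model RedHatAI/Llama-3.2-1B-Instruct-FP8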

Chapter 3. Environment variables

You can use environment variables to configure the system-level installation, build, and logging behavior of AI Inference Server.

Important

VLLM_PORT and VLLM_HOST_IP set the host port and IP address for internal usage of AI Inference Server. They are not the port and IP address of the API server. Do not use --host $VLLM_HOST_IP and --port $VLLM_PORT to start the API server.

Important

All environment variables used by AI Inference Server are prefixed with VLLM_. If you are using Kubernetes, do not name the service vllm, otherwise environment variables set by Kubernetes might come into conflict with AI Inference Server environment variables. This is because Kubernetes sets environment variables for each service with the capitalized service name as the prefix. For more information, see Kubernetes environment variables.
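For example, environment variables can be passed to the AI Inference Server container with --env, as in the following sketch based on the run command from Chapter 1; the variable values are illustrative:

$ podman run --rm -it \
--device nvidia.com/gpu=all \
--security-opt=label=disable \
-p 8000:8000 \
--env VLLM_LOGGING_LEVEL=DEBUG \
--env VLLM_NO_USAGE_STATS=1 \
registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2 \
--model RedHatAI/Llama-3.2-1B-Instruct-FP8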

Table 3.1. AI Inference Server environment variables
Environment variable | Description

VLLM_TARGET_DEVICE

Target device of vLLM, supporting cuda (by default), rocm, neuron, cpu, openvino.

MAX_JOBS

Maximum number of compilation jobs to run in parallel. By default, this is the number of CPUs.

NVCC_THREADS

Number of threads to use for nvcc. By default, this is 1. If set, MAX_JOBS will be reduced to avoid oversubscribing the CPU.

VLLM_USE_PRECOMPILED

If set, AI Inference Server uses precompiled binaries (*.so).

VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL

Whether to force using nightly wheel in Python build for testing.

CMAKE_BUILD_TYPE

CMake build type. Available options: "Debug", "Release", "RelWithDebInfo".

VERBOSE

If set, AI Inference Server prints verbose logs during installation.

VLLM_CONFIG_ROOT

Root directory for AI Inference Server configuration files.

VLLM_CACHE_ROOT

Root directory for AI Inference Server cache files.

VLLM_HOST_IP

Used in a distributed environment to determine the IP address of the current node.

VLLM_PORT

Used in a distributed environment to manually set the communication port.

VLLM_RPC_BASE_PATH

Path used for IPC when the frontend API server is running in multi-processing mode.

VLLM_USE_MODELSCOPE

If true, will load models from ModelScope instead of Hugging Face Hub.

VLLM_RINGBUFFER_WARNING_INTERVAL

Interval in seconds to log a warning message when the ring buffer is full.

CUDA_HOME

Path to cudatoolkit home directory, under which should be bin, include, and lib directories.

VLLM_NCCL_SO_PATH

Path to the NCCL library file. Needed for versions of NCCL >= 2.19 due to a bug in PyTorch.

LD_LIBRARY_PATH

When VLLM_NCCL_SO_PATH is not set, AI Inference Server tries to find the NCCL library in this path.

VLLM_USE_TRITON_FLASH_ATTN

Flag to control whether AI Inference Server uses Triton Flash Attention.

VLLM_FLASH_ATTN_VERSION

Force AI Inference Server to use a specific flash-attention version (2 or 3), only valid with the flash-attention backend.

VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE

Internal flag to enable Dynamo fullgraph capture.

LOCAL_RANK

Local rank of the process in the distributed setting, used to determine the GPU device ID.

CUDA_VISIBLE_DEVICES

Used to control the visible devices in a distributed setting.

VLLM_ENGINE_ITERATION_TIMEOUT_S

Timeout for each iteration in the engine.

VLLM_API_KEY

API key for AI Inference Server API server.

S3_ACCESS_KEY_ID

S3 access key ID for tensorizer to load model from S3.

S3_SECRET_ACCESS_KEY

S3 secret access key for tensorizer to load model from S3.

S3_ENDPOINT_URL

S3 endpoint URL for tensorizer to load model from S3.

VLLM_USAGE_STATS_SERVER

URL for AI Inference Server usage stats server.

VLLM_NO_USAGE_STATS

If true, disables collection of usage stats.

VLLM_DO_NOT_TRACK

If true, disables tracking of AI Inference Server usage stats.

VLLM_USAGE_SOURCE

Source for usage stats collection.

VLLM_CONFIGURE_LOGGING

If set to 1, AI Inference Server configures logging using the default configuration or the specified config path.

VLLM_LOGGING_CONFIG_PATH

Path to the logging configuration file.

VLLM_LOGGING_LEVEL

Default logging level for vLLM.

VLLM_LOGGING_PREFIX

If set, AI Inference Server prepends this prefix to all log messages.

VLLM_LOGITS_PROCESSOR_THREADS

Number of threads used for custom logits processors.

VLLM_TRACE_FUNCTION

If set to 1, AI Inference Server traces function calls for debugging.

VLLM_ATTENTION_BACKEND

Backend for attention computation, for example "TORCH_SDPA", "FLASH_ATTN", or "XFORMERS".

VLLM_USE_FLASHINFER_SAMPLER

If set, AI Inference Server uses the FlashInfer sampler.

VLLM_FLASHINFER_FORCE_TENSOR_CORES

Forces FlashInfer to use tensor cores; otherwise uses heuristics.

VLLM_PP_LAYER_PARTITION

Pipeline stage partition strategy.

VLLM_CPU_KVCACHE_SPACE

CPU key-value cache space (default is 4GB).

VLLM_CPU_OMP_THREADS_BIND

CPU core IDs bound by OpenMP threads.

VLLM_CPU_MOE_PREPACK

Whether to use prepack for MoE layer on unsupported CPUs.

VLLM_OPENVINO_DEVICE

OpenVINO device selection (default is CPU).

VLLM_OPENVINO_KVCACHE_SPACE

OpenVINO key-value cache space (default is 4GB).

VLLM_OPENVINO_CPU_KV_CACHE_PRECISION

Precision for OpenVINO KV cache.

VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS

Enables weights compression during model export by using HF Optimum.

VLLM_USE_RAY_SPMD_WORKER

Enables Ray SPMD worker for execution on all workers.

VLLM_USE_RAY_COMPILED_DAG

Uses the Compiled Graph API provided by Ray to optimize control plane overhead.

VLLM_USE_RAY_COMPILED_DAG_NCCL_CHANNEL

Enables NCCL communication in the Compiled Graph provided by Ray.

VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM

Enables GPU communication overlap in the Compiled Graph provided by Ray.

VLLM_WORKER_MULTIPROC_METHOD

Specifies the method for multiprocess workers, for example "fork".

VLLM_ASSETS_CACHE

Path to the cache for storing downloaded assets.

VLLM_IMAGE_FETCH_TIMEOUT

Timeout for fetching images when serving multimodal models (default is 5 seconds).

VLLM_VIDEO_FETCH_TIMEOUT

Timeout for fetching videos when serving multimodal models (default is 30 seconds).

VLLM_AUDIO_FETCH_TIMEOUT

Timeout for fetching audio when serving multimodal models (default is 10 seconds).

VLLM_MM_INPUT_CACHE_GIB

Cache size in GiB for multimodal input cache (default is 8GiB).

VLLM_XLA_CACHE_PATH

Path to the XLA persistent cache directory (only for XLA devices).

VLLM_XLA_CHECK_RECOMPILATION

If set, asserts on XLA recompilation after each execution step.

VLLM_FUSED_MOE_CHUNK_SIZE

Chunk size for fused MoE layer (default is 32768).

VLLM_NO_DEPRECATION_WARNING

If true, skips deprecation warnings.

VLLM_KEEP_ALIVE_ON_ENGINE_DEATH

If true, keeps the OpenAI API server alive even after engine errors.

VLLM_ALLOW_LONG_MAX_MODEL_LEN

Allows specifying a max sequence length greater than the default length of the model.

VLLM_TEST_FORCE_FP8_MARLIN

Forces FP8 Marlin for FP8 quantization regardless of hardware support.

VLLM_TEST_FORCE_LOAD_FORMAT

Forces a specific load format.

VLLM_RPC_TIMEOUT

Timeout for fetching response from backend server.

VLLM_PLUGINS

List of plugins to load.

VLLM_TORCH_PROFILER_DIR

Directory for saving Torch profiler traces.

VLLM_USE_TRITON_AWQ

If set, uses Triton implementations of AWQ.

VLLM_ALLOW_RUNTIME_LORA_UPDATING

If set, allows updating Lora adapters at runtime.

VLLM_SKIP_P2P_CHECK

Skips peer-to-peer capability check.

VLLM_DISABLED_KERNELS

List of quantization kernels to disable for performance comparisons.

VLLM_USE_V1

If set, uses V1 code path.

VLLM_ROCM_FP8_PADDING

Pads FP8 weights to 256 bytes for ROCm.

Q_SCALE_CONSTANT

Divisor for dynamic query scale factor calculation for FP8 KV Cache.

K_SCALE_CONSTANT

Divisor for dynamic key scale factor calculation for FP8 KV Cache.

V_SCALE_CONSTANT

Divisor for dynamic value scale factor calculation for FP8 KV Cache.

VLLM_ENABLE_V1_MULTIPROCESSING

If set, enables multiprocessing in LLM for the V1 code path.

VLLM_LOG_BATCHSIZE_INTERVAL

Time interval for logging batch size.

VLLM_SERVER_DEV_MODE

If set, AI Inference Server runs in development mode, enabling additional endpoints for debugging, for example /reset_prefix_cache.

VLLM_V1_OUTPUT_PROC_CHUNK_SIZE

Controls the maximum number of requests to handle in a single asyncio task for processing per-token outputs in the V1 AsyncLLM interface. It affects high-concurrency streaming requests.

VLLM_MLA_DISABLE

If set, AI Inference Server disables the MLA attention optimizations.

VLLM_ENABLE_MOE_ALIGN_BLOCK_SIZE_TRITON

If set, AI Inference Server uses the Triton implementation of moe_align_block_size, for example, moe_align_block_size_triton in fused_moe.py.

VLLM_RAY_PER_WORKER_GPUS

Number of GPUs per worker in Ray. Can be a fraction to allow Ray to schedule multiple actors on a single GPU.

VLLM_RAY_BUNDLE_INDICES

Specifies the indices used for the Ray bundle, for each worker. Format: comma-separated list of integers (e.g., "0,1,2,3").

VLLM_CUDART_SO_PATH

Specifies the path used by the find_loaded_library() method when it may not work properly.

VLLM_USE_HPU_CONTIGUOUS_CACHE_FETCH

Enables contiguous cache fetching to avoid costly gather operations on Gaudi3. Only applicable to HPU contiguous cache.

VLLM_DP_RANK

Rank of the process in the data parallel setting.

VLLM_DP_SIZE

World size of the data parallel setting.

VLLM_DP_MASTER_IP

IP address of the master node in the data parallel setting.

VLLM_DP_MASTER_PORT

Port of the master node in the data parallel setting.

VLLM_CI_USE_S3

Whether to use the S3 path for model loading in CI by using RunAI Streamer.

VLLM_MARLIN_USE_ATOMIC_ADD

Whether to use atomicAdd reduce in gptq/awq marlin kernel.

VLLM_V0_USE_OUTLINES_CACHE

Whether to turn on the outlines cache for V0. This cache is unbounded and on disk, so it is unsafe for environments with malicious users.

VLLM_TPU_DISABLE_TOPK_TOPP_OPTIMIZATION

If set, disables TPU-specific optimization for top-k & top-p sampling.

Chapter 4. Viewing AI Inference Server metrics

vLLM exposes various metrics via the /metrics endpoint on the AI Inference Server OpenAI-compatible API server.

You can start the server by using Python or by using Docker.

Procedure

  1. Launch AI Inference Server and load your model as shown in the following example. The command also exposes the OpenAI-compatible API.

    $ vllm serve unsloth/Llama-3.2-1B-Instruct
  2. Query the /metrics endpoint of the OpenAI-compatible API to get the latest metrics from the server:

    $ curl http://0.0.0.0:8000/metrics

    Example output

    # HELP vllm:iteration_tokens_total Histogram of number of tokens per engine_step.
    # TYPE vllm:iteration_tokens_total histogram
    vllm:iteration_tokens_total_sum{model_name="unsloth/Llama-3.2-1B-Instruct"} 0.0
    vllm:iteration_tokens_total_bucket{le="1.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="8.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="16.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="32.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="64.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="128.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="256.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="512.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    #...
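
    To filter the response for a single metric instead of reading the full output, you can pipe the result through grep, for example with one of the metric names listed in the following chapter:

    $ curl -s http://0.0.0.0:8000/metrics | grep vllm:num_requests_running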

Chapter 5. AI Inference Server metrics

AI Inference Server exposes vLLM metrics that you can use to monitor the health of the system.

Table 5.1. vLLM metrics

vllm:num_requests_running

Number of requests currently running on GPU.

vllm:num_requests_waiting

Number of requests waiting to be processed.

vllm:lora_requests_info

Running stats on LoRA requests.

vllm:num_requests_swapped

Number of requests swapped to CPU. Deprecated: KV cache offloading is not used in V1.

vllm:gpu_cache_usage_perc

GPU KV-cache usage. A value of 1 means 100% usage.

vllm:cpu_cache_usage_perc

CPU KV-cache usage. A value of 1 means 100% usage. Deprecated: KV cache offloading is not used in V1.

vllm:cpu_prefix_cache_hit_rate

CPU prefix cache block hit rate. Deprecated: KV cache offloading is not used in V1.

vllm:gpu_prefix_cache_hit_rate

GPU prefix cache block hit rate. Deprecated: Use vllm:gpu_prefix_cache_queries and vllm:gpu_prefix_cache_hits in V1.

vllm:num_preemptions_total

Cumulative number of preemptions from the engine.

vllm:prompt_tokens_total

Total number of prefill tokens processed.

vllm:generation_tokens_total

Total number of generation tokens processed.

vllm:iteration_tokens_total

Histogram of the number of tokens per engine step.

vllm:time_to_first_token_seconds

Histogram of time to the first token in seconds.

vllm:time_per_output_token_seconds

Histogram of time per output token in seconds.

vllm:e2e_request_latency_seconds

Histogram of end-to-end request latency in seconds.

vllm:request_queue_time_seconds

Histogram of time spent in the WAITING phase for a request.

vllm:request_inference_time_seconds

Histogram of time spent in the RUNNING phase for a request.

vllm:request_prefill_time_seconds

Histogram of time spent in the PREFILL phase for a request.

vllm:request_decode_time_seconds

Histogram of time spent in the DECODE phase for a request.

vllm:time_in_queue_requests

Histogram of time the request spent in the queue in seconds. Deprecated: Use vllm:request_queue_time_seconds instead.

vllm:model_forward_time_milliseconds

Histogram of time spent in the model forward pass in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead.

vllm:model_execute_time_milliseconds

Histogram of time spent in the model execute function in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead.

vllm:request_prompt_tokens

Histogram of the number of prefill tokens processed.

vllm:request_generation_tokens

Histogram of the number of generation tokens processed.

vllm:request_max_num_generation_tokens

Histogram of the maximum number of requested generation tokens.

vllm:request_params_n

Histogram of the n request parameter.

vllm:request_params_max_tokens

Histogram of the max_tokens request parameter.

vllm:request_success_total

Count of successfully processed requests.

vllm:spec_decode_draft_acceptance_rate

Speculative token acceptance rate.

vllm:spec_decode_efficiency

Speculative decoding system efficiency.

vllm:spec_decode_num_accepted_tokens_total

Total number of accepted tokens.

vllm:spec_decode_num_draft_tokens_total

Total number of draft tokens.

vllm:spec_decode_num_emitted_tokens_total

Total number of emitted tokens.

Chapter 6. Deprecated metrics

The following metrics are deprecated and will be removed in a future version of AI Inference Server:

  • vllm:num_requests_swapped. KV cache offloading is not used in V1.
  • vllm:cpu_cache_usage_perc. KV cache offloading is not used in V1.
  • vllm:cpu_prefix_cache_hit_rate. KV cache offloading is not used in V1.
  • vllm:gpu_prefix_cache_hit_rate. Use vllm:gpu_prefix_cache_queries and vllm:gpu_prefix_cache_hits in V1 instead.
  • vllm:time_in_queue_requests. Use vllm:request_queue_time_seconds instead.
  • vllm:model_forward_time_milliseconds. Use the prefill, decode, or inference time metrics instead.
  • vllm:model_execute_time_milliseconds. Use the prefill, decode, or inference time metrics instead.
Important

When metrics are deprecated in version X.Y, they are hidden in version X.Y+1 but can be re-enabled by using the --show-hidden-metrics-for-version=X.Y escape hatch. Deprecated metrics are completely removed in version X.Y+2.
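
For example, a minimal sketch of re-enabling metrics that were deprecated in a previous version X.Y; the model name and version value are placeholders:

$ vllm serve <model> --show-hidden-metrics-for-version=X.Y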

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.