vLLM server arguments


Red Hat AI Inference Server 3.0

Server arguments for running Red Hat AI Inference Server

Red Hat AI Documentation Team

Abstract

Learn how to configure and run Red Hat AI Inference Server.

Preface

Red Hat AI Inference Server provides an OpenAI-compatible API server for inference serving. You can control the behavior of the server with arguments.

This document begins with a list of the most important server arguments that you use with the vllm serve command. A complete list of vllm serve arguments, environment variables, and server metrics is also provided.

Chapter 1. Key vLLM server arguments

There are four key arguments that you use to configure AI Inference Server to run on your hardware:

  1. --tensor-parallel-size: distributes your model across your host GPUs.
  2. --gpu-memory-utilization: adjusts accelerator memory utilization for model weights, activations, and KV cache. Measured as a fraction from 0.0 to 1.0 that defaults to 0.9. For example, you can set this value to 0.8 to limit GPU memory consumption by AI Inference Server to 80%. Use the largest value that is stable for your deployment to maximize throughput.
  3. --max-model-len: limits the maximum context length of the model, measured in tokens. Set this to prevent problems with memory if the model’s default context length is too long.
  4. --max-num-batched-tokens: limits the maximum batch size of tokens to process per step, measured in tokens. Increasing this improves throughput but can affect output token latency.

For example, to run the Red Hat AI Inference Server container and serve a model with vLLM, run the following, changing server arguments as required:

$ podman run --rm -it \
--device nvidia.com/gpu=all \
--security-opt=label=disable \
--shm-size=4GB -p 8000:8000 \
--userns=keep-id:uid=1001 \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" \
--env=VLLM_NO_USAGE_STATS=1 \
-v ./rhaiis-cache:/opt/app-root/src/.cache \
registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.0.0 \
--model RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--tensor-parallel-size 2 \
--gpu-memory-utilization 0.8 \
--max-model-len 16384 \
--max-num-batched-tokens 2048
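
After the server starts, you can verify that it is serving requests by querying the OpenAI-compatible API. The following is a minimal check that assumes the server is reachable on localhost port 8000 and that the model name matches the value passed to --model:

$ curl -s http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
  "model": "RedHatAI/Llama-3.2-1B-Instruct-FP8",
  "prompt": "What is Red Hat AI Inference Server?",
  "max_tokens": 50
}'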

Chapter 2. Complete list of vLLM server arguments

The following is a comprehensive list of the vLLM server arguments that you can use with the vllm serve command. An explanation of each server argument and default values is provided.

2.1. vLLM server arguments

--model

Name or path of the Hugging Face model to use.

Default value: facebook/opt-125m

--task

The task to use the model for. Each AI Inference Server instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, auto can be used to select it; otherwise, you must specify explicitly which task to use.

Default value: auto

Options: auto, generate, embedding, embed, classify, score, reward, transcription

--tokenizer
Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path is used.
--hf-config-path
Name or path of the Hugging Face config to use. If unspecified, model name or path is used.
--skip-tokenizer-init
Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids.
--revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--code-revision
The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--tokenizer-revision
Revision of the Hugging Face tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, uses the default version.
--tokenizer-mode

The tokenizer mode.

  • auto uses the fast tokenizer if available.
  • slow always uses the slow tokenizer.
  • mistral always uses the mistral_common tokenizer.
  • custom uses --tokenizer to select the preregistered tokenizer.

Default value: auto

Options: auto, slow, mistral, custom

--trust-remote-code
Trust remote code from Hugging Face.
--allowed-local-media-path
Allows API requests to read local images or videos from directories specified by the server file system. This is a security risk and should only be enabled in trusted environments.
--download-dir
Directory to download and load the weights. Defaults to the default Hugging Face cache directory.
--load-format

The format of the model weights to load.

Default value: auto

Options: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer

  • auto tries to load the weights in the safetensors format and falls back to the PyTorch bin format if the safetensors format is not available.
  • pt loads the weights in the PyTorch bin format.
  • safetensors loads the weights in the safetensors format.
  • npcache loads the weights in PyTorch format and stores a NumPy cache to speed up loading.
  • dummy initializes the weights with random values, which is mainly for profiling.
  • tensorizer loads the weights using tensorizer from CoreWeave. See the Tensorize AI Inference Server Model script in the Examples section for more information.
  • runai_streamer loads the safetensors weights by using Run:ai Model Streamer.
  • bitsandbytes loads the weights using bitsandbytes quantization.
--config-format

The format of the model config to load.

Options: auto, hf, mistral

auto tries to load the config in the hf format if available; if not, it tries to load it in the mistral format.

Default value: ConfigFormat.AUTO

--dtype

Data type for model weights and activations.

Default value: auto

Options: auto, half, float16, bfloat16, float, float32

  • auto uses FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
  • half for FP16. Recommended for AWQ quantization.
  • float16 is the same as half.
  • bfloat16 for a balance between precision and range.
  • float is shorthand for FP32 precision.
  • float32 for FP32 precision.
--kv-cache-dtype

Data type for KV cache storage. If auto, uses the model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).

Options: auto, fp8, fp8_e5m2, fp8_e4m3

Default value: auto

--max-model-len
Model context length. If unspecified, value is automatically derived from the model config.
--guided-decoding-backend

Which engine to use for guided decoding (JSON schema, regex, and so on) by default. Currently supports outlines-dev/outlines, mlc-ai/xgrammar, and noamgat/lm-format-enforcer. Can be overridden per request via the guided_decoding_backend parameter. Backend-specific options can be supplied in a comma-separated list following a colon after the backend name. Valid backends and all available options are:

  • xgrammar:no-fallback,
  • xgrammar:disable-any-whitespace,
  • outlines:no-fallback,
  • lm-format-enforcer:no-fallback

Default value: xgrammar
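
For example, the following invocation selects the xgrammar backend and disables fallback to another backend. This is a sketch only; the model name is reused from the earlier example:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--guided-decoding-backend xgrammar:no-fallback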

--logits-processor-pattern
Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.
--model-impl

Which implementation of the model to use.

Default value: auto

Options: auto, vllm, transformers

  • auto tries to use the AI Inference Server implementation if it exists and falls back to the Transformers implementation if no AI Inference Server implementation is available.
  • vllm uses the AI Inference Server model implementation.
  • transformers uses the Transformers model implementation.
--distributed-executor-backend

Backend to use for distributed model workers, either ray or mp (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, mp is used to keep processing on a single host. Otherwise, this defaults to ray if Ray is installed, and fails otherwise. Note that TPU only supports Ray for distributed inference.

Options: ray, mp, uni, external_launcher

--pipeline-parallel-size, -pp

Number of nodes to split the model across by dividing model layers into sequential pipeline stages.

Default value: 1

--tensor-parallel-size, -tp

Split the model across multiple GPUs to share storage and computation load.

Default value: 1

--enable-expert-parallel
Use expert parallelism instead of tensor parallelism for MoE layers.
--max-parallel-loading-workers
Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
--ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
--block-size

Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.

Options: 8, 16, 32, 64, 128

--enable-prefix-caching, --no-enable-prefix-caching
Enables automatic prefix caching. Use --no-enable-prefix-caching to disable explicitly.
--disable-sliding-window
Disables sliding window, capping to sliding window size.
--use-v2-block-manager
DEPRECATED: block manager v1 has been removed and SelfAttnBlockSpaceManager (block manager v2) is now the default. Setting this flag to True or False has no effect on AI Inference Server behavior.
--num-lookahead-slots

Experimental scheduling config necessary for speculative decoding. This will be replaced by the speculative config in the future; it is present to enable correctness tests until then.

Default value: 0

--seed
Random seed for operations.
--swap-space

CPU swap space size (GiB) per GPU.

Default value: 4

--cpu-offload-gb

The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory dynamically in each model forward pass.

Default value: 0

--gpu-memory-utilization

The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, uses the default value of 0.9. This is a per-instance limit, and only applies to the current AI Inference Server instance. It does not matter if you have another AI Inference Server instance running on the same GPU. For example, if you have two AI Inference Server instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.

Default value: 0.9
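
For example, to run two AI Inference Server instances on the same GPU as described above, you might start each with half of the GPU memory and a different API port. The port values shown here are illustrative:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--gpu-memory-utilization 0.5 --port 8000

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--gpu-memory-utilization 0.5 --port 8001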

--num-gpu-blocks-override
If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
--max-num-batched-tokens
Maximum number of batched tokens per iteration. In vLLM, a batch is the set of all tokens from active sequences that are jointly fed into the model at each scheduler step. It is measured as "tokens per iteration" rather than "sequences per iteration".
--max-num-partial-prefills

For chunked prefill, the maximum number of concurrent partial prefills.

Default value: 1

--max-long-partial-prefills

For chunked prefill, the maximum number of prompts longer than --long-prefill-token-threshold that are prefilled concurrently. Setting this less than --max-num-partial-prefills allows shorter prompts to jump the queue ahead of longer prompts in some cases, improving latency.

Default value: 1

--long-prefill-token-threshold

For chunked prefill, a request is considered long if the prompt is longer than this number of tokens. Defaults to 4% of the model’s context length.

Default value: 0

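For example, a chunked prefill configuration that combines these options might look like the following sketch. The token values are illustrative and need tuning for your workload:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--enable-chunked-prefill \
--max-num-batched-tokens 2048 \
--max-num-partial-prefills 2 \
--max-long-partial-prefills 1 \
--long-prefill-token-threshold 512
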
--max-num-seqs
Maximum number of sequences per iteration.
--max-logprobs

Maximum number of log probabilities to return when logprobs is specified in SamplingParams.

Default value: 20

--disable-log-stats
Disable logging statistics.
--quantization, -q

Method used to quantize the weights. If None, first check the quantization_config attribute in the model config file. If that is None, assume the model weights are not quantized and use dtype to determine the data type of the weights.

Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None

--rope-scaling
RoPE scaling configuration in JSON format. For example, {"rope_type":"dynamic","factor":2.0}
--rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
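
For example, the following sketch applies dynamic RoPE scaling together with a custom theta. The factor and theta values are illustrative only and depend on the model:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--rope-scaling '{"rope_type":"dynamic","factor":2.0}' \
--rope-theta 1000000
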
--hf-overrides
Extra arguments for the Hugging Face config. This should be a JSON string that is parsed into a dictionary.
--enforce-eager
Always use eager-mode PyTorch. If False, uses eager mode and CUDA graphs in hybrid for maximal performance and flexibility.
--max-seq-len-to-capture

Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, AI Inference Server falls back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, AI Inference Server falls back to the eager mode.

Default value: 8192

--disable-custom-all-reduce
See ParallelConfig.
--tokenizer-pool-size

Size of tokenizer pool to use for asynchronous tokenization. If 0, uses synchronous tokenization.

Default value: 0

--tokenizer-pool-type

Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.

Default value: ray

--tokenizer-pool-extra-config
Extra config for tokenizer pool. This should be a JSON string that is parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
--limit-mm-per-prompt
For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
--mm-processor-kwargs
Overrides for the multimodal input mapping and processing, e.g., image processor. For example: {"num_crops": 4}.
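
For example, the following sketch combines both multimodal options. The model name is a placeholder for a multimodal model, and the values are taken from the examples above:

$ vllm serve <multimodal_model> \
--limit-mm-per-prompt image=16,video=2 \
--mm-processor-kwargs '{"num_crops": 4}'
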
--disable-mm-preprocessor-cache
If true, disables caching of the multimodal preprocessor and mapper. Not recommended.
--enable-lora
If True, enable handling of LoRA adapters.
--enable-lora-bias
If True, enable bias for LoRA adapters.
--max-loras

Max number of LoRAs in a single batch.

Default value: 1

--max-lora-rank

Max LoRA rank.

Default value: 16

--lora-extra-vocab-size

Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).

Default value: 256

--lora-dtype

Data type for LoRA. If auto, will default to base model dtype.

Default value: auto

Options: auto, float16, bfloat16

--long-lora-scaling-factors
Specify multiple scaling factors (which can be different from the base model scaling factor; see, for example, Long LoRA) to allow multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
--max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be greater than max_loras. Defaults to max_loras.
--fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this uses the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
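
For example, a LoRA-enabled deployment might combine these options as in the following sketch. The adapter count and rank are illustrative values:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--enable-lora \
--max-loras 4 \
--max-lora-rank 32 \
--lora-dtype bfloat16
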
--enable-prompt-adapter
If True, enable handling of PromptAdapters.
--max-prompt-adapters

Max number of PromptAdapters in a batch.

Default value: 1

--max-prompt-adapter-token

Max number of PromptAdapter tokens.

Default value: 0

--device

Device type for AI Inference Server execution.

Options: auto, cuda, neuron, cpu, openvino, tpu, xpu, hpu

Default value: auto

--num-scheduler-steps

Maximum number of forward steps per scheduler call.

Default value: 1

--use-tqdm-on-load, --no-use-tqdm-on-load

Whether to enable or disable progress bar when loading model weights.

Default value: True

--multi-step-stream-outputs

If False, multi-step streams outputs at the end of all steps.

Default value: True

--scheduler-delay-factor

Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.

Default value: 0.0

--enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
--speculative-model
The name of the draft model to be used in speculative decoding.
--speculative-model-quantization

Method used to quantize the weights of the speculative model. If None, AI Inference Server first checks the quantization_config attribute in the model config file. If that is None, AI Inference Server assumes the model weights are not quantized and uses dtype to determine the data type of the weights.

Options: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None

--num-speculative-tokens
The number of speculative tokens to sample from the draft model in speculative decoding.
--speculative-disable-mqa-scorer
If set to True, the MQA scorer is disabled in speculative decoding and the server falls back to batch expansion.
--speculative-draft-tensor-parallel-size, -spec-draft-tp
Number of tensor parallel replicas for the draft model in speculative decoding.
--speculative-max-model-len
The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
--speculative-disable-by-batch-size
Disable speculative decoding for new incoming requests if the number of enqueued requests is larger than this value.
--ngram-prompt-lookup-max
Max size of window for ngram prompt lookup in speculative decoding.
--ngram-prompt-lookup-min
Minimum size of window for ngram prompt lookup in speculative decoding.
--spec-decoding-acceptance-method

Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported:

  1. RejectionSampler: does not allow changing the acceptance rate of draft tokens.
  2. TypicalAcceptanceSampler: configurable, allows for a higher acceptance rate at the cost of lower quality, and vice versa.

Default value: rejection_sampler

Options: rejection_sampler, typical_acceptance_sampler

--typical-acceptance-sampler-posterior-threshold
Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions during speculative decoding. Defaults to 0.09.
--typical-acceptance-sampler-posterior-alpha
A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to square root of --typical-acceptance-sampler-posterior-threshold, for example, 0.3.
--disable-logprobs-during-spec-decoding
If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal sampling, target sampling, and after accepted tokens are determined.
--model-loader-extra-config
Extra config for model loader. This is passed to the model loader corresponding to the chosen load_format. This should be a JSON string that is parsed into a dictionary.
--ignore-patterns

The pattern(s) to ignore when loading the model. Defaults to original/**/* to avoid repeated loading of llama’s checkpoints.

Default value: []

--preemption-mode
If recompute, the engine performs preemption by recomputing; If swap, the engine performs preemption by block swapping.
--served-model-name
The model name(s) used in the API. If multiple names are provided, the server responds to any of the provided names. The model name in the model field of a response is the first name in this list. If not specified, the model name is the same as the --model argument. Note that the name(s) are also used in the model_name tag content for Prometheus metrics. If multiple names are provided, the metrics tag takes the first one.
--qlora-adapter-name-or-path
Name or path of the QLoRA adapter.
--show-hidden-metrics-for-version
Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you use --show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release.
--otlp-traces-endpoint
Target URL to which OpenTelemetry traces are sent.
--collect-detailed-traces
Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, the server collects detailed traces for the specified modules. This involves the use of possibly costly or blocking operations and hence might have a performance impact.
--disable-async-output-proc
Disable async output processing. This may result in lower performance.
--scheduling-policy

The scheduling policy to use: fcfs (first come, first served; requests are handled in order of arrival; default) or priority (requests are handled based on the given priority, where a lower value means earlier handling, with time of arrival deciding any ties).

Default value: fcfs

Options: fcfs, priority

--scheduler-cls

The scheduler class to use. vllm.core.scheduler.Scheduler is the default scheduler. Can be a class directly or the path to a class of form mod.custom_class.

Default value: vllm.core.scheduler.Scheduler

--override-neuron-config
Override or set the Neuron device configuration, for example, {"cast_logits_dtype": "bfloat16"}.
--override-pooler-config
Override or set the pooling method for pooling models, for example, {"pooling_type": "mean", "normalize": false}.
--compilation-config, -O
torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it is interpreted as the optimization level. NOTE: level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production. To specify the full compilation config, use a JSON string. Following the convention of traditional compilers, using -O without a space is also supported; -O3 is equivalent to -O 3.
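
For example, the following two invocations are equivalent ways to request optimization level 3:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 -O3

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 --compilation-config 3
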
--kv-transfer-config
The configurations for distributed KV cache transfer. Should be a JSON string.
--worker-cls

The worker class to use for distributed execution.

Default value: auto

--worker-extension-cls
The worker extension class on top of the worker class. It is useful if you want to add new functions to the worker class without changing the existing functions.
--generation-config

The folder path to the generation config. Defaults to auto, where the generation config is loaded from the model path. If set to vllm, no generation config is loaded and the AI Inference Server defaults are used. If set to a folder path, the generation config is loaded from the specified folder path. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.

Default value: auto

--override-generation-config
Overrides or sets the generation config in JSON format, for example, {"temperature": 0.5}. If used with --generation-config=auto, the override parameters are merged with the default config from the model. If the generation config is None, only the override parameters are used.
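
For example, the following sketch merges a temperature override with the generation config loaded from the model:

$ vllm serve RedHatAI/Llama-3.2-1B-Instruct-FP8 \
--generation-config auto \
--override-generation-config '{"temperature": 0.5}'
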
--enable-sleep-mode
Enable sleep mode for the engine. Only supported for CUDA platform.
--calculate-kv-scales
This enables dynamic calculation of k_scale and v_scale when kv-cache-dtype is fp8. If calculate-kv-scales is false, the scales are loaded from the model checkpoint if available. Otherwise, the scales default to 1.0.
--additional-config
Additional config for specified platform in JSON format. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. The input format is like {<config_key>: <config_value>}
--enable-reasoning
Whether to enable reasoning_content for the model. If enabled, the model is able to generate reasoning content.
--reasoning-parser

Select the reasoning parser depending on the model that you are using. This is used to parse the reasoning content into OpenAI API format. Required for --enable-reasoning.

Options: deepseek_r1
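
For example, the following sketch enables reasoning output parsing. The model name is a placeholder for a reasoning-capable model:

$ vllm serve <reasoning_model> \
--enable-reasoning \
--reasoning-parser deepseek_r1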

--chat-template
Pass a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input. For more information, see Chat Template.
--tool-call-parser
Options: deepseek_v3, granite-20b-fc, granite, hermes, internlm, jamba, llama4_json, llama3_json, mistral, phi4_mini_json, pythonic, or name registered in --tool-parser-plugin.
--cuda-graph-sizes

CUDA graph capture sizes; the default is 512. If one value is provided, the capture list follows the pattern: [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)]. If more than one value (for example, 1 2 128) is provided, the capture list follows the provided list.

Default: 512

--data-parallel-address, -dpa
Address of the data parallel cluster head-node.
--data-parallel-rpc-port, -dpp
Port for data parallel RPC communication.
--data-parallel-size, -dp

Number of data parallel groups. MoE layers are sharded according to the product of the tensor parallel size and data parallel size.

Default: 1

--data-parallel-size-local, -dpl
Number of data parallel replicas to run on this node.
--disable-cascade-attn, --no-disable-cascade-attn

Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it could be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention is only used when the heuristics indicate that it is beneficial.

Default: False

--disable-chunked-mm-input, --no-disable-chunked-mm-input

If set to true and chunked prefill is enabled, do not partially schedule a multimodal item. Only used in V1. This ensures that if a request has a mixed prompt (for example, text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (for example, TTTTIIIII, leaving IIIII), the item is scheduled as TTTT in one step and IIIIIIIIII in the next.

Default: False

--enable-prompt-embeds, --no-enable-prompt-embeds

If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation.

Default: False

--guided-decoding-disable-additional-properties, --no-guided-decoding-disable-additional-properties

If True, the guidance backend will not use additionalProperties in the JSON schema. This is only supported for the guidance backend and is used to better align its behaviour with outlines and xgrammar.

Default: False

--guided-decoding-disable-any-whitespace, --no-guided-decoding-disable-any-whitespace

If True, the model will not generate any whitespace during guided decoding. This is only supported for xgrammar and guidance backends.

Default: False

--guided-decoding-disable-fallback, --no-guided-decoding-disable-fallback

If True, vLLM will not fall back to a different backend on error.

Default: False

--hf-token
The token to use as HTTP bearer authorization for remote files. If True, uses the token generated when running huggingface-cli login, stored in ~/.huggingface.
--kv-events-config
The configuration for event publishing. Should either be a valid JSON string or JSON keys passed individually.
--prefix-caching-hash-algo

Set the hash algorithm for prefix caching:

Options: builtin, sha256

  • builtin is Python’s built-in hash.
  • sha256 is collision resistant but with certain overheads.

Default: builtin

--pt-load-map-location

Map location for loading a PyTorch checkpoint, to support loading checkpoints that can only be loaded on certain devices such as cuda; this is equivalent to {"": "cuda"}. Another supported format is mapping from one device to another, for example from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from the command line, the strings in the dictionary need to be double quoted for JSON parsing. For more details, see the original documentation for map_location in https://pytorch.org/docs/stable/generated/torch.load.html

Default: cpu

--speculative-config
The configurations for speculative decoding. Should be a JSON string.
--ssl-keyfile
Location of your TLS private key in PEM format.

2.2. Async engine arguments

usage: vllm serve [-h] [--disable-log-requests]
--disable-log-requests
Disable logging requests.

Chapter 3. vLLM server usage

usage: vllm serve [-h] [--host HOST] [--port PORT]
                  [--uvicorn-log-level {debug,info,warning,error,critical,trace}]
                  [--disable-uvicorn-access-log] [--allow-credentials]
                  [--allowed-origins ALLOWED_ORIGINS]
                  [--allowed-methods ALLOWED_METHODS]
                  [--allowed-headers ALLOWED_HEADERS] [--api-key API_KEY]
                  [--lora-modules LORA_MODULES [LORA_MODULES ...]]
                  [--prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]]
                  [--chat-template CHAT_TEMPLATE]
                  [--chat-template-content-format {auto,string,openai}]
                  [--response-role RESPONSE_ROLE] [--ssl-keyfile SSL_KEYFILE]
                  [--ssl-certfile SSL_CERTFILE] [--ssl-ca-certs SSL_CA_CERTS]
                  [--enable-ssl-refresh] [--ssl-cert-reqs SSL_CERT_REQS]
                  [--root-path ROOT_PATH] [--middleware MIDDLEWARE]
                  [--return-tokens-as-token-ids]
                  [--disable-frontend-multiprocessing]
                  [--enable-request-id-headers] [--enable-auto-tool-choice]
                  [--tool-call-parser {deepseek_v3,granite-20b-fc,granite,hermes,internlm,jamba,llama4_json,llama3_json,mistral,phi4_mini_json,pythonic} or name registered in --tool-parser-plugin]
                  [--tool-parser-plugin TOOL_PARSER_PLUGIN] [--model MODEL]
                  [--task {auto,classify,draft,embed,embedding,generate,reward,score,transcription}]
                  [--tokenizer TOKENIZER]
                  [--tokenizer-mode {auto,custom,mistral,slow}]
                  [--trust-remote-code | --no-trust-remote-code]
                  [--dtype {auto,bfloat16,float,float16,float32,half}]
                  [--seed SEED] [--hf-config-path HF_CONFIG_PATH]
                  [--allowed-local-media-path ALLOWED_LOCAL_MEDIA_PATH]
                  [--revision REVISION] [--code-revision CODE_REVISION]
                  [--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA]
                  [--tokenizer-revision TOKENIZER_REVISION]
                  [--max-model-len MAX_MODEL_LEN]
                  [--quantization {aqlm,awq,awq_marlin,bitblas,bitsandbytes,compressed-tensors,deepspeedfp,experts_int8,fbgemm_fp8,fp8,gguf,gptq,gptq_bitblas,gptq_marlin,gptq_marlin_24,hqq,ipex,marlin,modelopt,moe_wna16,neuron_quant,nvfp4,ptpc_fp8,qqq,quark,torchao,tpu_int8,None}]
                  [--enforce-eager | --no-enforce-eager]
                  [--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE]
                  [--max-logprobs MAX_LOGPROBS]
                  [--disable-sliding-window | --no-disable-sliding-window]
                  [--disable-cascade-attn | --no-disable-cascade-attn]
                  [--skip-tokenizer-init | --no-skip-tokenizer-init]
                  [--enable-prompt-embeds | --no-enable-prompt-embeds]
                  [--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]]
                  [--disable-async-output-proc]
                  [--config-format {auto,hf,mistral}] [--hf-token [HF_TOKEN]]
                  [--hf-overrides HF_OVERRIDES]
                  [--override-neuron-config OVERRIDE_NEURON_CONFIG]
                  [--override-pooler-config OVERRIDE_POOLER_CONFIG]
                  [--logits-processor-pattern LOGITS_PROCESSOR_PATTERN]
                  [--generation-config GENERATION_CONFIG]
                  [--override-generation-config OVERRIDE_GENERATION_CONFIG]
                  [--enable-sleep-mode | --no-enable-sleep-mode]
                  [--model-impl {auto,vllm,transformers}]
                  [--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral,runai_streamer,runai_streamer_sharded,fastsafetensors}]
                  [--download-dir DOWNLOAD_DIR]
                  [--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG]
                  [--ignore-patterns IGNORE_PATTERNS [IGNORE_PATTERNS ...]]
                  [--use-tqdm-on-load | --no-use-tqdm-on-load]
                  [--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH]
                  [--pt-load-map-location PT_LOAD_MAP_LOCATION]
                  [--guided-decoding-backend {auto,guidance,lm-format-enforcer,outlines,xgrammar}]
                  [--guided-decoding-disable-fallback | --no-guided-decoding-disable-fallback]
                  [--guided-decoding-disable-any-whitespace | --no-guided-decoding-disable-any-whitespace]
                  [--guided-decoding-disable-additional-properties | --no-guided-decoding-disable-additional-properties]
                  [--enable-reasoning | --no-enable-reasoning]
                  [--reasoning-parser {deepseek_r1,granite,qwen3}]
                  [--distributed-executor-backend {external_launcher,mp,ray,uni,None}]
                  [--pipeline-parallel-size PIPELINE_PARALLEL_SIZE]
                  [--tensor-parallel-size TENSOR_PARALLEL_SIZE]
                  [--data-parallel-size DATA_PARALLEL_SIZE]
                  [--data-parallel-size-local DATA_PARALLEL_SIZE_LOCAL]
                  [--data-parallel-address DATA_PARALLEL_ADDRESS]
                  [--data-parallel-rpc-port DATA_PARALLEL_RPC_PORT]
                  [--enable-expert-parallel | --no-enable-expert-parallel]
                  [--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS]
                  [--ray-workers-use-nsight | --no-ray-workers-use-nsight]
                  [--disable-custom-all-reduce | --no-disable-custom-all-reduce]
                  [--worker-cls WORKER_CLS]
                  [--worker-extension-cls WORKER_EXTENSION_CLS]
                  [--block-size {1,8,16,32,64,128}]
                  [--gpu-memory-utilization GPU_MEMORY_UTILIZATION]
                  [--swap-space SWAP_SPACE]
                  [--kv-cache-dtype {auto,fp8,fp8_e4m3,fp8_e5m2}]
                  [--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
                  [--enable-prefix-caching | --no-enable-prefix-caching]
                  [--prefix-caching-hash-algo {builtin,sha256}]
                  [--cpu-offload-gb CPU_OFFLOAD_GB]
                  [--calculate-kv-scales | --no-calculate-kv-scales]
                  [--tokenizer-pool-size TOKENIZER_POOL_SIZE]
                  [--tokenizer-pool-type TOKENIZER_POOL_TYPE]
                  [--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG]
                  [--limit-mm-per-prompt LIMIT_MM_PER_PROMPT]
                  [--mm-processor-kwargs MM_PROCESSOR_KWARGS]
                  [--disable-mm-preprocessor-cache | --no-disable-mm-preprocessor-cache]
                  [--enable-lora | --no-enable-lora]
                  [--enable-lora-bias | --no-enable-lora-bias]
                  [--max-loras MAX_LORAS] [--max-lora-rank MAX_LORA_RANK]
                  [--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE]
                  [--lora-dtype {auto,bfloat16,float16}]
                  [--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS [LONG_LORA_SCALING_FACTORS ...]]
                  [--max-cpu-loras MAX_CPU_LORAS]
                  [--fully-sharded-loras | --no-fully-sharded-loras]
                  [--enable-prompt-adapter | --no-enable-prompt-adapter]
                  [--max-prompt-adapters MAX_PROMPT_ADAPTERS]
                  [--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN]
                  [--device {auto,cpu,cuda,hpu,neuron,tpu,xpu}]
                  [--speculative-config SPECULATIVE_CONFIG]
                  [--show-hidden-metrics-for-version SHOW_HIDDEN_METRICS_FOR_VERSION]
                  [--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
                  [--collect-detailed-traces {all,model,worker,None} [{all,model,worker,None} ...]]
                  [--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS]
                  [--max-num-seqs MAX_NUM_SEQS]
                  [--max-num-partial-prefills MAX_NUM_PARTIAL_PREFILLS]
                  [--max-long-partial-prefills MAX_LONG_PARTIAL_PREFILLS]
                  [--cuda-graph-sizes CUDA_GRAPH_SIZES [CUDA_GRAPH_SIZES ...]]
                  [--long-prefill-token-threshold LONG_PREFILL_TOKEN_THRESHOLD]
                  [--num-lookahead-slots NUM_LOOKAHEAD_SLOTS]
                  [--scheduler-delay-factor SCHEDULER_DELAY_FACTOR]
                  [--preemption-mode {recompute,swap,None}]
                  [--num-scheduler-steps NUM_SCHEDULER_STEPS]
                  [--multi-step-stream-outputs | --no-multi-step-stream-outputs]
                  [--scheduling-policy {fcfs,priority}]
                  [--enable-chunked-prefill | --no-enable-chunked-prefill]
                  [--disable-chunked-mm-input | --no-disable-chunked-mm-input]
                  [--scheduler-cls SCHEDULER_CLS]
                  [--kv-transfer-config KV_TRANSFER_CONFIG]
                  [--kv-events-config KV_EVENTS_CONFIG]
                  [--compilation-config COMPILATION_CONFIG]
                  [--additional-config ADDITIONAL_CONFIG]
                  [--use-v2-block-manager] [--disable-log-stats]
                  [--disable-log-requests] [--max-log-len MAX_LOG_LEN]
                  [--disable-fastapi-docs] [--enable-prompt-tokens-details]
                  [--enable-server-load-tracking]

Chapter 4. Environment variables

You can use environment variables to configure the system-level installation, build, and logging behavior of AI Inference Server.

Important

VLLM_PORT and VLLM_HOST_IP set the host port and IP address for internal usage of AI Inference Server. They are not the port and IP address for the API server. Do not use --host $VLLM_HOST_IP and --port $VLLM_PORT to start the API server.

Important

All environment variables used by AI Inference Server are prefixed with VLLM_. If you are using Kubernetes, do not name the service vllm, otherwise environment variables set by Kubernetes might come into conflict with AI Inference Server environment variables. This is because Kubernetes sets environment variables for each service with the capitalized service name as the prefix. For more information, see Kubernetes environment variables.
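
For example, the following sketch passes AI Inference Server environment variables from the table below into the container at startup. The logging level and API key values are illustrative:

$ podman run --rm -it \
--device nvidia.com/gpu=all \
-p 8000:8000 \
--env "VLLM_LOGGING_LEVEL=DEBUG" \
--env "VLLM_NO_USAGE_STATS=1" \
--env "VLLM_API_KEY=$MY_API_KEY" \
registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.0.0 \
--model RedHatAI/Llama-3.2-1B-Instruct-FP8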

Table 4.1. AI Inference Server environment variables
Environment variable | Description

VLLM_TARGET_DEVICE

Target device of vLLM, supporting cuda (by default), rocm, neuron, cpu, openvino.

MAX_JOBS

Maximum number of compilation jobs to run in parallel. By default, this is the number of CPUs.

NVCC_THREADS

Number of threads to use for nvcc. By default, this is 1. If set, MAX_JOBS will be reduced to avoid oversubscribing the CPU.

VLLM_USE_PRECOMPILED

If set, AI Inference Server uses precompiled binaries (*.so).

VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL

Whether to force using nightly wheel in Python build for testing.

CMAKE_BUILD_TYPE

CMake build type. Available options: "Debug", "Release", "RelWithDebInfo".

VERBOSE

If set, AI Inference Server prints verbose logs during installation.

VLLM_CONFIG_ROOT

Root directory for AI Inference Server configuration files.

VLLM_CACHE_ROOT

Root directory for AI Inference Server cache files.

VLLM_HOST_IP

Used in a distributed environment to determine the IP address of the current node.

VLLM_PORT

Used in a distributed environment to manually set the communication port.

VLLM_RPC_BASE_PATH

Path used for IPC when the frontend API server is running in multi-processing mode.

VLLM_USE_MODELSCOPE

If true, will load models from ModelScope instead of Hugging Face Hub.

VLLM_RINGBUFFER_WARNING_INTERVAL

Interval in seconds to log a warning message when the ring buffer is full.

CUDA_HOME

Path to cudatoolkit home directory, under which should be bin, include, and lib directories.

VLLM_NCCL_SO_PATH

Path to the NCCL library file. Needed for versions of NCCL >= 2.19 due to a bug in PyTorch.

LD_LIBRARY_PATH

Used when VLLM_NCCL_SO_PATH is not set, AI Inference Server tries to find the NCCL library in this path.

VLLM_USE_TRITON_FLASH_ATTN

Flag to control whether AI Inference Server uses Triton Flash Attention.

VLLM_FLASH_ATTN_VERSION

Force AI Inference Server to use a specific flash-attention version (2 or 3), only valid with the flash-attention backend.

VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE

Internal flag to enable Dynamo fullgraph capture.

LOCAL_RANK

Local rank of the process in the distributed setting, used to determine the GPU device ID.

CUDA_VISIBLE_DEVICES

Used to control the visible devices in a distributed setting.

VLLM_ENGINE_ITERATION_TIMEOUT_S

Timeout for each iteration in the engine.

VLLM_API_KEY

API key for AI Inference Server API server.

S3_ACCESS_KEY_ID

S3 access key ID for tensorizer to load model from S3.

S3_SECRET_ACCESS_KEY

S3 secret access key for tensorizer to load model from S3.

S3_ENDPOINT_URL

S3 endpoint URL for tensorizer to load model from S3.

VLLM_USAGE_STATS_SERVER

URL for AI Inference Server usage stats server.

VLLM_NO_USAGE_STATS

If true, disables collection of usage stats.

VLLM_DO_NOT_TRACK

If true, disables tracking of AI Inference Server usage stats.

VLLM_USAGE_SOURCE

Source for usage stats collection.

VLLM_CONFIGURE_LOGGING

If set to 1, AI Inference Server configures logging using the default configuration or the specified config path.

VLLM_LOGGING_CONFIG_PATH

Path to the logging configuration file.

VLLM_LOGGING_LEVEL

Default logging level for vLLM.

VLLM_LOGGING_PREFIX

If set, AI Inference Server prepends this prefix to all log messages.

VLLM_LOGITS_PROCESSOR_THREADS

Number of threads used for custom logits processors.

VLLM_TRACE_FUNCTION

If set to 1, AI Inference Server traces function calls for debugging.

VLLM_ATTENTION_BACKEND

Backend for attention computation, for example, "TORCH_SDPA", "FLASH_ATTN", or "XFORMERS".

VLLM_USE_FLASHINFER_SAMPLER

If set, AI Inference Server uses the FlashInfer sampler.

VLLM_FLASHINFER_FORCE_TENSOR_CORES

Forces FlashInfer to use tensor cores; otherwise uses heuristics.

VLLM_PP_LAYER_PARTITION

Pipeline stage partition strategy.

VLLM_CPU_KVCACHE_SPACE

CPU key-value cache space (default is 4GB).

VLLM_CPU_OMP_THREADS_BIND

CPU core IDs bound by OpenMP threads.

VLLM_CPU_MOE_PREPACK

Whether to use prepack for MoE layer on unsupported CPUs.

VLLM_OPENVINO_DEVICE

OpenVINO device selection (default is CPU).

VLLM_OPENVINO_KVCACHE_SPACE

OpenVINO key-value cache space (default is 4GB).

VLLM_OPENVINO_CPU_KV_CACHE_PRECISION

Precision for OpenVINO KV cache.

VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS

Enables weights compression during model export by using HF Optimum.

VLLM_USE_RAY_SPMD_WORKER

Enables Ray SPMD worker for execution on all workers.

VLLM_USE_RAY_COMPILED_DAG

Uses the Compiled Graph API provided by Ray to optimize control plane overhead.

VLLM_USE_RAY_COMPILED_DAG_NCCL_CHANNEL

Enables NCCL communication in the Compiled Graph provided by Ray.

VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM

Enables GPU communication overlap in the Compiled Graph provided by Ray.

VLLM_WORKER_MULTIPROC_METHOD

Specifies the method for multiprocess workers, for example, "fork".

VLLM_ASSETS_CACHE

Path to the cache for storing downloaded assets.

VLLM_IMAGE_FETCH_TIMEOUT

Timeout for fetching images when serving multimodal models (default is 5 seconds).

VLLM_VIDEO_FETCH_TIMEOUT

Timeout for fetching videos when serving multimodal models (default is 30 seconds).

VLLM_AUDIO_FETCH_TIMEOUT

Timeout for fetching audio when serving multimodal models (default is 10 seconds).

VLLM_MM_INPUT_CACHE_GIB

Cache size in GiB for multimodal input cache (default is 8GiB).

VLLM_XLA_CACHE_PATH

Path to the XLA persistent cache directory (only for XLA devices).

VLLM_XLA_CHECK_RECOMPILATION

If set, asserts on XLA recompilation after each execution step.

VLLM_FUSED_MOE_CHUNK_SIZE

Chunk size for fused MoE layer (default is 32768).

VLLM_NO_DEPRECATION_WARNING

If true, skips deprecation warnings.

VLLM_KEEP_ALIVE_ON_ENGINE_DEATH

If true, keeps the OpenAI API server alive even after engine errors.

VLLM_ALLOW_LONG_MAX_MODEL_LEN

Allows specifying a max sequence length greater than the default length of the model.

VLLM_TEST_FORCE_FP8_MARLIN

Forces FP8 Marlin for FP8 quantization regardless of hardware support.

VLLM_TEST_FORCE_LOAD_FORMAT

Forces a specific load format.

VLLM_RPC_TIMEOUT

Timeout for fetching response from backend server.

VLLM_PLUGINS

List of plugins to load.

VLLM_TORCH_PROFILER_DIR

Directory for saving Torch profiler traces.

VLLM_USE_TRITON_AWQ

If set, uses Triton implementations of AWQ.

VLLM_ALLOW_RUNTIME_LORA_UPDATING

If set, allows updating Lora adapters at runtime.

VLLM_SKIP_P2P_CHECK

Skips peer-to-peer capability check.

VLLM_DISABLED_KERNELS

List of quantization kernels to disable for performance comparisons.

VLLM_USE_V1

If set, uses V1 code path.

VLLM_ROCM_FP8_PADDING

Pads FP8 weights to 256 bytes for ROCm.

Q_SCALE_CONSTANT

Divisor for dynamic query scale factor calculation for FP8 KV Cache.

K_SCALE_CONSTANT

Divisor for dynamic key scale factor calculation for FP8 KV Cache.

V_SCALE_CONSTANT

Divisor for dynamic value scale factor calculation for FP8 KV Cache.

VLLM_ENABLE_V1_MULTIPROCESSING

If set, enables multiprocessing in LLM for the V1 code path.

VLLM_LOG_BATCHSIZE_INTERVAL

Time interval for logging batch size.

VLLM_SERVER_DEV_MODE

If set, AI Inference Server runs in development mode, enabling additional endpoints for debugging, for example, /reset_prefix_cache.

VLLM_V1_OUTPUT_PROC_CHUNK_SIZE

Controls the maximum number of requests to handle in a single asyncio task for processing per-token outputs in the V1 AsyncLLM interface. It affects high-concurrency streaming requests.

VLLM_MLA_DISABLE

If set, AI Inference Server disables the MLA attention optimizations.

VLLM_ENABLE_MOE_ALIGN_BLOCK_SIZE_TRITON

If set, AI Inference Server uses the Triton implementation of moe_align_block_size, for example, moe_align_block_size_triton in fused_moe.py.

VLLM_RAY_PER_WORKER_GPUS

Number of GPUs per worker in Ray. Can be a fraction to allow Ray to schedule multiple actors on a single GPU.

VLLM_RAY_BUNDLE_INDICES

Specifies the indices used for the Ray bundle, for each worker. Format: comma-separated list of integers (e.g., "0,1,2,3").

VLLM_CUDART_SO_PATH

Specifies the path for the find_loaded_library() method when it may not work properly.

VLLM_USE_HPU_CONTIGUOUS_CACHE_FETCH

Enables contiguous cache fetching to avoid costly gather operations on Gaudi3. Only applicable to HPU contiguous cache.

VLLM_DP_RANK

Rank of the process in the data parallel setting.

VLLM_DP_SIZE

World size of the data parallel setting.

VLLM_DP_MASTER_IP

IP address of the master node in the data parallel setting.

VLLM_DP_MASTER_PORT

Port of the master node in the data parallel setting.

VLLM_CI_USE_S3

Whether to use the S3 path for model loading in CI by using RunAI Streamer.

VLLM_MARLIN_USE_ATOMIC_ADD

Whether to use atomicAdd reduce in gptq/awq marlin kernel.

VLLM_V0_USE_OUTLINES_CACHE

Whether to turn on the outlines cache for V0. This cache is unbounded and on disk, so it is unsafe for environments with malicious users.

VLLM_TPU_DISABLE_TOPK_TOPP_OPTIMIZATION

If set, disables TPU-specific optimization for top-k & top-p sampling.

Chapter 5. Viewing AI Inference Server metrics

vLLM exposes various metrics via the /metrics endpoint on the AI Inference Server OpenAI-compatible API server.

You can start the server by using Python or by using Docker.

Procedure

  1. Launch AI Inference Server and load your model as shown in the following example. The command also exposes the OpenAI-compatible API.

    $ vllm serve unsloth/Llama-3.2-1B-Instruct
  2. Query the /metrics endpoint of the OpenAI-compatible API to get the latest metrics from the server:

    $ curl http://0.0.0.0:8000/metrics

    Example output

    # HELP vllm:iteration_tokens_total Histogram of number of tokens per engine_step.
    # TYPE vllm:iteration_tokens_total histogram
    vllm:iteration_tokens_total_sum{model_name="unsloth/Llama-3.2-1B-Instruct"} 0.0
    vllm:iteration_tokens_total_bucket{le="1.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="8.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="16.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="32.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="64.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="128.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="256.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    vllm:iteration_tokens_total_bucket{le="512.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
    #...

Chapter 6. AI Inference Server metrics

AI Inference Server exposes vLLM metrics that you can use to monitor the health of the system.

Table 6.1. vLLM metrics
Metric name | Description

vllm:num_requests_running

Number of requests currently running on GPU.

vllm:num_requests_waiting

Number of requests waiting to be processed.

vllm:lora_requests_info

Running stats on LoRA requests.

vllm:num_requests_swapped

Number of requests swapped to CPU. Deprecated: KV cache offloading is not used in V1.

vllm:gpu_cache_usage_perc

GPU KV-cache usage. A value of 1 means 100% usage.

vllm:cpu_cache_usage_perc

CPU KV-cache usage. A value of 1 means 100% usage. Deprecated: KV cache offloading is not used in V1.

vllm:cpu_prefix_cache_hit_rate

CPU prefix cache block hit rate. Deprecated: KV cache offloading is not used in V1.

vllm:gpu_prefix_cache_hit_rate

GPU prefix cache block hit rate. Deprecated: Use vllm:gpu_prefix_cache_queries and vllm:gpu_prefix_cache_hits in V1.

vllm:num_preemptions_total

Cumulative number of preemptions from the engine.

vllm:prompt_tokens_total

Total number of prefill tokens processed.

vllm:generation_tokens_total

Total number of generation tokens processed.

vllm:iteration_tokens_total

Histogram of the number of tokens per engine step.

vllm:time_to_first_token_seconds

Histogram of time to the first token in seconds.

vllm:time_per_output_token_seconds

Histogram of time per output token in seconds.

vllm:e2e_request_latency_seconds

Histogram of end-to-end request latency in seconds.

vllm:request_queue_time_seconds

Histogram of time spent in the WAITING phase for a request.

vllm:request_inference_time_seconds

Histogram of time spent in the RUNNING phase for a request.

vllm:request_prefill_time_seconds

Histogram of time spent in the PREFILL phase for a request.

vllm:request_decode_time_seconds

Histogram of time spent in the DECODE phase for a request.

vllm:time_in_queue_requests

Histogram of time the request spent in the queue in seconds. Deprecated: Use vllm:request_queue_time_seconds instead.

vllm:model_forward_time_milliseconds

Histogram of time spent in the model forward pass in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead.

vllm:model_execute_time_milliseconds

Histogram of time spent in the model execute function in milliseconds. Deprecated: Use prefill/decode/inference time metrics instead.

vllm:request_prompt_tokens

Histogram of the number of prefill tokens processed.

vllm:request_generation_tokens

Histogram of the number of generation tokens processed.

vllm:request_max_num_generation_tokens

Histogram of the maximum number of requested generation tokens.

vllm:request_params_n

Histogram of the n request parameter.

vllm:request_params_max_tokens

Histogram of the max_tokens request parameter.

vllm:request_success_total

Count of successfully processed requests.

vllm:spec_decode_draft_acceptance_rate

Speculative token acceptance rate.

vllm:spec_decode_efficiency

Speculative decoding system efficiency.

vllm:spec_decode_num_accepted_tokens_total

Total number of accepted tokens.

vllm:spec_decode_num_draft_tokens_total

Total number of draft tokens.

vllm:spec_decode_num_emitted_tokens_total

Total number of emitted tokens.
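
For example, to check the request gauges on a running server, you can filter the /metrics output. The endpoint address assumes the server started in Chapter 5:

$ curl -s http://0.0.0.0:8000/metrics | grep -E "vllm:num_requests_(running|waiting)"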

Chapter 7. Deprecated metrics

The following metrics are deprecated and will be removed in a future version of AI Inference Server:

  • vllm:num_requests_swapped
  • vllm:cpu_cache_usage_perc
  • vllm:cpu_prefix_cache_hit_rate (KV cache offloading is not used in V1).
  • vllm:gpu_prefix_cache_hit_rate. This metric is replaced by queries+hits counters in V1.
  • vllm:time_in_queue_requests. This metric is duplicated by vllm:request_queue_time_seconds.
  • vllm:model_forward_time_milliseconds
  • vllm:model_execute_time_milliseconds. Prefill, decode or inference time metrics should be used instead.
Important

When metrics are deprecated in version X.Y, they are hidden in version X.Y+1 but can be re-enabled by using the --show-hidden-metrics-for-version=X.Y escape hatch. Deprecated metrics are completely removed in the following version X.Y+2.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.