
Chapter 5. Enabling the Red Hat AI Inference Server systemd Quadlet service


You can enable the Red Hat AI Inference Server systemd Quadlet service to serve language models for inference with NVIDIA CUDA or AMD ROCm AI accelerators on your RHEL AI instance. After you configure the service, it starts automatically on system boot.

Prerequisites

  • You have deployed a Red Hat Enterprise Linux AI instance with NVIDIA CUDA or AMD ROCm AI accelerators installed.
  • You are logged in as a user with sudo access.
  • You have a Hugging Face access token. You can obtain a token from Hugging Face settings.
Note

You do not need to create cache or model folders for Red Hat AI Inference Server or Red Hat AI Model Optimization Toolkit. On first boot, the following folders are created with the correct permissions for model serving:

/var/lib/rhaiis/cache
/var/lib/rhaiis/models
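
For example, after the first boot you can confirm that the folders exist and check their ownership:

[cloud-user@localhost ~]$ ls -ld /var/lib/rhaiis/cache /var/lib/rhaiis/models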

Procedure

  1. Open a shell prompt on the RHEL AI server.
  2. Review the images that ship with Red Hat Enterprise Linux AI by running the following command:

    [cloud-user@localhost ~]$ podman images

    A list of shipped images is returned.
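
    If the list is long, you can filter it to the inference server image. A minimal sketch, assuming the image reference contains "rhaiis":

    [cloud-user@localhost ~]$ podman images --filter 'reference=*rhaiis*'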

  3. Make a copy of the example configuration file:

    [cloud-user@localhost ~]$ sudo cp /etc/containers/systemd/rhaiis.container.d/install.conf.example /etc/containers/systemd/rhaiis.container.d/install.conf
  4. Edit the configuration file and set the required parameters:

    [cloud-user@localhost ~]$ sudo vi /etc/containers/systemd/rhaiis.container.d/install.conf
    [Container]
    # Set to 1 to run in offline mode and disable model downloading at runtime.
    # Default value is 0.
    Environment=HF_HUB_OFFLINE=0
    
    # Update with the required authentication token for downloading models from Hugging Face.
    Environment=HUGGING_FACE_HUB_TOKEN=<YOUR_HUGGING_FACE_HUB_TOKEN>
    
    # Set to 1 to disable vLLM usage statistics collection. Default value is 0.
    # Environment=VLLM_NO_USAGE_STATS=1
    
    # Configure the vLLM server arguments
    Exec=--model meta-llama/Llama-3.2-1B-Instruct \
         --tensor-parallel-size 1 \
         --max-model-len 4096
    
    PublishPort=8000:8000
    ShmSize=4G
    
    [Install]
    WantedBy=multi-user.target

    Use the following table to understand the parameters that you can set:

    Table 5.1. Red Hat AI Inference Server configuration parameters

    HF_HUB_OFFLINE

    Set to 1 to run in offline mode and disable model downloading at runtime. Default value is 0.

    HUGGING_FACE_HUB_TOKEN

    Required authentication token for downloading models from Hugging Face.

    VLLM_NO_USAGE_STATS

    Set to 1 to disable vLLM usage statistics collection. Default value is 0.

    --model

    vLLM server argument for the model identifier or local path to the model to serve, for example, meta-llama/Llama-3.2-1B-Instruct or /opt/app-root/src/models/<MODEL_NAME>.

    --tensor-parallel-size

    Number of AI accelerators to use for tensor parallelism when serving the model. Default value is 1.

    --max-model-len

    Maximum model length (context size). This depends on available AI accelerator memory. The default value is 131072, but lower values such as 4096 might be better for accelerators with less memory.

    Note

    See vLLM server arguments for the complete list of server arguments that you can configure.
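
    For example, a sketch of an alternative Exec configuration for an instance with two AI accelerators that serves a locally stored model; the model path is a placeholder:

    Exec=--model /opt/app-root/src/models/<MODEL_NAME> \
         --tensor-parallel-size 2 \
         --max-model-len 8192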

  5. Reload the systemd configuration:

    [cloud-user@localhost ~]$ sudo systemctl daemon-reload
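
    Optionally, inspect the generated service unit to confirm that Quadlet merged your drop-in configuration:

    [cloud-user@localhost ~]$ systemctl cat rhaiis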
  6. Start the Red Hat AI Inference Server systemd service. The WantedBy=multi-user.target setting in the [Install] section enables the service to start automatically at boot, so you do not need to run a separate systemctl enable command:

    [cloud-user@localhost ~]$ sudo systemctl start rhaiis

Verification

  1. Check the service status:

    [cloud-user@localhost ~]$ sudo systemctl status rhaiis

    Example output

    ● rhaiis.service - Red Hat AI Inference Server (vLLM)
         Loaded: loaded (/etc/containers/systemd/rhaiis.container; generated)
         Active: active (running) since Wed 2025-11-12 12:19:01 UTC; 1min 22s ago
           Docs: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_ai/
        Process: 2391 ExecStartPre=/usr/libexec/rhaiis/check-lib.sh (code=exited, status=0/SUCCESS)

  2. Monitor the service logs to verify that the model is loaded and the vLLM server is running:

    [cloud-user@localhost ~]$ sudo podman logs -f rhaiis

    Example output

    (APIServer pid=1) INFO:     Started server process [1]
    (APIServer pid=1) INFO:     Waiting for application startup.
    (APIServer pid=1) INFO:     Application startup complete.
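
    Instead of tailing the logs, you can poll the health endpoint that the vLLM server exposes; it returns HTTP 200 after startup completes:

    [cloud-user@localhost ~]$ curl -i http://localhost:8000/health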

  3. Test the inference server API:

    [cloud-user@localhost ~]$ curl -X POST -H "Content-Type: application/json" -d '{
        "prompt": "What is the capital of France?",
        "max_tokens": 50
    }' http://localhost:8000/v1/completions | jq

    Example output

    {
      "id": "cmpl-81f99f3c28d34f99a4c2d154d6bac822",
      "object": "text_completion",
      "created": 1762952825,
      "model": "RedHatAI/granite-3.3-8b-instruct",
      "choices": [
        {
          "index": 0,
          "text": "\n\nThe capital of France is Paris.",
          "logprobs": null,
          "finish_reason": "stop",
          "stop_reason": null,
          "token_ids": null,
          "prompt_logprobs": null,
          "prompt_token_ids": null
        }
      ],
      "service_tier": null,
      "system_fingerprint": null,
      "usage": {
        "prompt_tokens": 7,
        "total_tokens": 18,
        "completion_tokens": 11,
        "prompt_tokens_details": null
      },
      "kv_transfer_params": null
    }
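
    You can also test the chat endpoint that the vLLM OpenAI-compatible server exposes at /v1/chat/completions. In this sketch, the model name matches the model configured in install.conf:

    [cloud-user@localhost ~]$ curl -X POST -H "Content-Type: application/json" -d '{
        "model": "meta-llama/Llama-3.2-1B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 50
    }' http://localhost:8000/v1/chat/completions | jq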
