Chapter 4. Customizing model deployments


You can customize a model’s deployment to suit your specific needs, for example, to deploy a particular family of models or to enhance an existing deployment. You can modify the runtime configuration for a specific deployment by setting additional serving runtime arguments and environment variables.

These customizations apply only to the selected model deployment and do not change the default runtime configuration. You can set these parameters when you first deploy a model or by editing an existing deployment.

You might need additional parameters beyond the default ones to deploy specific models or to enhance an existing model deployment. In such cases, you can modify the parameters of an existing runtime to suit your deployment needs.

Note

Customizing the parameters of a runtime only affects the selected model deployment.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You have deployed a model.

Procedure

  1. From the OpenShift AI dashboard, click AI hub → Deployments.

    The Deployments page opens.

  2. Click Stop next to the name of the model you want to customize.
  3. Click the action menu (⋮) and select Edit.

    The Configuration parameters section shows predefined serving runtime parameters, if any are available.

  4. Customize the runtime parameters in the Configuration parameters section:

    1. Modify the values in Additional serving runtime arguments to define how the deployed model behaves.
    2. Modify the values in Additional environment variables to define variables in the model’s environment.

      Note

      Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail.

  5. After you are done customizing the runtime parameters, click Redeploy to save.
  6. Click Start to deploy the model with your changes.
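
The arguments and variables that you enter in the dashboard are stored on the deployment's InferenceService resource. As an illustrative sketch (the resource name, project name, argument, and variable shown here are placeholders, not values from this procedure), a customized deployment might look like this:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model        # placeholder deployment name
  namespace: example-project # placeholder project name
spec:
  predictor:
    model:
      args:                  # values from "Additional serving runtime arguments"
        - --max-model-len=4096
      env:                   # values from "Additional environment variables"
        - name: EXAMPLE_VAR
          value: example-value
```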

Verification

  • Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
  • Confirm that the arguments and variables that you set appear in spec.predictor.model.args and spec.predictor.model.env by one of the following methods:

    • Checking the InferenceService YAML from the OpenShift Console.
    • Using the following command in the OpenShift CLI:

      oc get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>
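
If you prefer to script this check, you can parse the JSON output of the oc command programmatically. The following is a minimal sketch in Python; the sample object and its field values are placeholder assumptions, standing in for the real output of `oc get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>`:

```python
import json

def get_runtime_customizations(inference_service: dict):
    """Return (args, env) from an InferenceService JSON object."""
    model = inference_service["spec"]["predictor"]["model"]
    return model.get("args", []), model.get("env", [])

# Trimmed-down placeholder InferenceService object; in practice, load the
# JSON emitted by the oc command shown above.
sample = {
    "spec": {
        "predictor": {
            "model": {
                "args": ["--max-model-len=4096"],
                "env": [{"name": "EXAMPLE_VAR", "value": "example-value"}],
            }
        }
    }
}

args, env = get_runtime_customizations(sample)
print(args)  # the serving runtime arguments you set
print(env)   # the environment variables you set
```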

4.2. Customizable model serving runtime parameters

You can modify the parameters of an existing model serving runtime to suit your deployment needs.

For more information about parameters for each of the supported serving runtimes, see the following table:

Serving runtime                                          Resource
NVIDIA Triton Inference Server                           NVIDIA Triton Inference Server: Model Parameters
OpenVINO Model Server                                    OpenVINO Model Server Features: Dynamic Input Parameters
Seldon MLServer                                          MLServer Documentation: Model Settings
vLLM NVIDIA GPU ServingRuntime for KServe                vLLM: Engine Arguments; OpenAI-Compatible Server
vLLM AMD GPU ServingRuntime for KServe                   vLLM: Engine Arguments; OpenAI-Compatible Server
vLLM Intel Gaudi Accelerator ServingRuntime for KServe   vLLM: Engine Arguments; OpenAI-Compatible Server

4.3. Customizing the vLLM model-serving runtime

In certain cases, you might need to add flags or environment variables to the vLLM ServingRuntime for KServe to deploy a particular family of LLMs.

The following procedure describes how to customize the vLLM model-serving runtime to deploy a Llama, Granite, or Mistral model.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • For Llama model deployment, you have downloaded a meta-llama-3 model to your object storage.
  • For Granite model deployment, you have downloaded a granite-7b-instruct or granite-20B-code-instruct model to your object storage.
  • For Mistral model deployment, you have downloaded a mistral-7B-Instruct-v0.3 model to your object storage.
  • You have enabled the vLLM ServingRuntime for KServe runtime.
  • You have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery Operator on your cluster. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.

Procedure

  1. Follow the steps to deploy a model as described in Deploying models on the model serving platform.
  2. In the Serving runtime field, select vLLM ServingRuntime for KServe.
  3. If you are deploying a meta-llama-3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

    --distributed-executor-backend=mp
    --max-model-len=6144

    The --distributed-executor-backend=mp argument sets the backend to multiprocessing for distributed model workers. The --max-model-len=6144 argument sets the maximum context length of the model to 6144 tokens.
  4. If you are deploying a granite-7B-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

    --distributed-executor-backend=mp

    This argument sets the backend to multiprocessing for distributed model workers.
  5. If you are deploying a granite-20B-code-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

    --distributed-executor-backend=mp
    --tensor-parallel-size=4
    --max-model-len=6448

    The --distributed-executor-backend=mp argument sets the backend to multiprocessing for distributed model workers. The --tensor-parallel-size=4 argument distributes inference across 4 GPUs in a single node. The --max-model-len=6448 argument sets the maximum context length of the model to 6448 tokens.
  6. If you are deploying a mistral-7B-Instruct-v0.3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

    --distributed-executor-backend=mp
    --max-model-len=15344

    The --distributed-executor-backend=mp argument sets the backend to multiprocessing for distributed model workers. The --max-model-len=15344 argument sets the maximum context length of the model to 15344 tokens.
  7. Click Deploy.

Verification

  • Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
  • For granite models, use the following example command to verify API requests to your deployed model:

    curl -q -X 'POST' \
        "https://<inference_endpoint_url>:443/v1/chat/completions" \
        -H 'accept: application/json' \
        -H 'Content-Type: application/json' \
        -d "{
        \"model\": \"<model_name>\",
        \"messages\": [{\"role\": \"user\", \"content\": \"<prompt>\"}],
        \"max_tokens\": <max_tokens>,
        \"temperature\": <temperature>
        }"
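
You can make the same check from Python. The following sketch mirrors the request above; the model name, prompt, and parameter values in the example are placeholder assumptions, and `<inference_endpoint_url>` must be replaced with your deployment's endpoint:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str, max_tokens: int, temperature: float) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def query_model(endpoint_url: str, payload: dict) -> dict:
    """POST the payload to the /v1/chat/completions endpoint and return the JSON response."""
    request = urllib.request.Request(
        f"{endpoint_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    # Placeholder model name and sampling parameters for illustration.
    payload = build_chat_payload("granite-7b-instruct", "What is OpenShift?", 200, 0.1)
    # Uncomment and substitute your endpoint to send the request:
    # result = query_model("https://<inference_endpoint_url>:443", payload)
```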