Configuring your model-serving platform

Abstract

Configure your model-serving platform in Red Hat OpenShift AI Self-Managed.
Chapter 1. About model-serving platforms
As an OpenShift AI administrator, you can enable your preferred serving platform and make it available for serving models. You can also add a custom or a tested and verified model-serving runtime.
1.1. About model serving
When you serve a model, you upload a trained model into Red Hat OpenShift AI for querying, which allows you to integrate your trained models into intelligent applications.
You can upload a model to an S3-compatible object storage, persistent volume claim, or Open Container Initiative (OCI) image. You can then access and train the model from your project workbench. After training the model, you can serve or deploy the model using a model-serving platform.
Serving or deploying the model makes the model available as a service, or model runtime server, that you can access using an API. You can then access the inference endpoints for the deployed model from the dashboard and see predictions based on data inputs that you provide through API calls. Querying the model through the API is also called model inferencing.
You can also serve models on the NVIDIA NIM model serving platform. The model-serving platform that you choose depends on your business needs:
- If you want to deploy each model on its own runtime server, select the Model serving platform. The model serving platform is recommended for production use.
- If you want to use NVIDIA Inference Microservices (NIMs) to deploy a model, select the NVIDIA NIM model serving platform.
1.1.1. Model serving platform
You can deploy each model from a dedicated model server, which can help you deploy, monitor, scale, and maintain models that require increased resources. Based on the KServe component, this model serving platform is ideal for serving large models.
The model serving platform is helpful for use cases such as:
- Large language models (LLMs)
- Generative AI
For more information about setting up the model serving platform, see Installing and managing Red Hat OpenShift AI components.
1.1.2. NVIDIA NIM model serving platform
You can deploy models using NVIDIA Inference Microservices (NIM) on the NVIDIA NIM model serving platform.
NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of microservices designed for secure, reliable deployment of high performance AI model inferencing across clouds, data centers and workstations.
NVIDIA NIM inference services are helpful for use cases such as:
- Using GPU-accelerated containers to run inference on models optimized by NVIDIA
- Deploying generative AI for virtual screening, content generation, and avatar creation
The NVIDIA NIM model serving platform is based on the model serving platform. To use the NVIDIA NIM model serving platform, you must first install the model serving platform.
For more information, see Installing and managing Red Hat OpenShift AI components.
1.2. Model-serving runtimes
You can serve models on the model serving platform by using model-serving runtimes. The configuration of a model-serving runtime is defined by the ServingRuntime and InferenceService custom resource definitions (CRDs).
1.2.1. ServingRuntime
The ServingRuntime CRD creates a serving runtime, an environment for deploying and managing a model. It creates the templates for pods that dynamically load and unload models of various formats and also exposes a service endpoint for inferencing requests.
The following YAML configuration is an example of the vLLM ServingRuntime for KServe model-serving runtime. The configuration includes various flags, environment variables and command-line arguments.
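The YAML itself did not survive extraction. The following sketch, annotated with the callout numbers described below, shows the general shape of such a runtime; the image reference, port, and annotation values are representative assumptions, not the verbatim original:

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-runtime
  annotations:
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'  # 1
    openshift.io/display-name: vLLM ServingRuntime for KServe      # 2
    prometheus.io/path: /metrics                                   # 3
    prometheus.io/port: "8080"                                     # 4
spec:
  containers:
    - name: kserve-container
      args:
        - --port=8080
        - --model=/mnt/models                                      # 5
        - --served-model-name={{.Name}}                            # 6
      command:                                                     # 7
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      image: quay.io/modh/vllm@sha256:<digest>                     # 8
      ports:
        - containerPort: 8080
          protocol: TCP
  multiModel: false                                                # 9
  supportedModelFormats:                                           # 10
    - autoSelect: true
      name: vLLM
```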
- 1
- The recommended accelerator to use with the runtime.
- 2
- The name with which the serving runtime is displayed.
- 3
- The endpoint used by Prometheus to scrape metrics for monitoring.
- 4
- The port used by Prometheus to scrape metrics for monitoring.
- 5
- The path to where the model files are stored in the runtime container.
- 6
- Passes the model name that is specified by the {{.Name}} template variable in the runtime container specification to the runtime environment. The {{.Name}} variable maps to the spec.predictor.name field in the InferenceService metadata object.
- 7
- The entrypoint command that starts the runtime container.
- 8
- The runtime container image used by the serving runtime. This image differs depending on the type of accelerator used.
- 9
- Specifies that the runtime is used for model serving.
- 10
- Specifies the model formats supported by the runtime.
1.2.2. InferenceService
The InferenceService CRD creates a server or inference service that processes inference queries, passes them to the model, and then returns the inference output.
The inference service also performs the following actions:
- Specifies the location and format of the model.
- Specifies the serving runtime used to serve the model.
- Enables the passthrough route for gRPC or REST inference.
- Defines HTTP or gRPC endpoints for the deployed model.
The following example shows the InferenceService YAML configuration file that is generated when deploying a granite model with the vLLM runtime:
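The generated YAML is missing here. A minimal sketch of such an InferenceService follows; the resource name, runtime name, and storage details are illustrative assumptions:

```yaml
# Sketch: InferenceService for a granite model served with the vLLM runtime
# (names and storage connection details are assumptions, not the original file)
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: granite
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      runtime: vllm-runtime
      storage:
        key: aws-connection-my-storage
        path: models/granite
```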
1.3. Model-serving runtimes for accelerators
OpenShift AI provides support for accelerators through preinstalled model-serving runtimes.
1.3.1. NVIDIA GPUs
You can serve models with NVIDIA graphics processing units (GPUs) by using the vLLM NVIDIA GPU ServingRuntime for KServe runtime. To use the runtime, you must enable GPU support in OpenShift AI. This includes installing and configuring the Node Feature Discovery Operator on your cluster. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
1.3.2. Intel Gaudi accelerators
You can serve models with Intel Gaudi accelerators by using the vLLM Intel Gaudi Accelerator ServingRuntime for KServe runtime. To use the runtime, you must enable hybrid processing unit (HPU) support in OpenShift AI. This includes installing the Intel Gaudi Base Operator and configuring a hardware profile. For more information, see Intel Gaudi Base Operator OpenShift installation and Working with hardware profiles.
For information about recommended vLLM parameters, environment variables, supported configurations and more, see vLLM with Intel® Gaudi® AI Accelerators.
Warm-up is a model initialization and performance optimization step that is useful for reducing cold-start delays and first-inference latency. Depending on the model size, warm-up can lead to longer model loading times.
While highly recommended in production environments to avoid performance limitations, you can choose to skip warm-up for non-production environments to reduce model loading times and accelerate model development and testing cycles. To skip warm-up, follow the steps described in Customizing the parameters of a deployed model-serving runtime to add the following environment variable in the Configuration parameters section of your model deployment:
`VLLM_SKIP_WARMUP="true"`
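When set through the dashboard, an environment variable like this surfaces under the predictor spec of the deployed InferenceService. A sketch, assuming the standard KServe layout:

```yaml
# Sketch: how VLLM_SKIP_WARMUP appears in the resulting InferenceService
# (surrounding fields are omitted; this fragment is illustrative)
spec:
  predictor:
    model:
      env:
        - name: VLLM_SKIP_WARMUP
          value: "true"
```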
1.3.3. AMD GPUs
You can serve models with AMD GPUs by using the vLLM AMD GPU ServingRuntime for KServe runtime. To use the runtime, you must enable support for AMD graphics processing units (GPUs) in OpenShift AI. This includes installing the AMD GPU operator and configuring a hardware profile. For more information, see Deploying the AMD GPU operator on OpenShift in the AMD documentation and Working with hardware profiles.
1.3.4. IBM Spyre AI accelerators on x86 and IBM Z
Support for IBM Spyre AI Accelerators on x86 is currently available in Red Hat OpenShift AI 3.2 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Support for IBM Spyre AI Accelerators on s390x is currently available in Red Hat OpenShift AI 3.0 as a General Availability (GA) feature.
You can serve models with IBM Spyre AI accelerators on x86 by using the vLLM Spyre AI Accelerator ServingRuntime for KServe runtime. For IBM Z (s390x architecture), use the vLLM Spyre s390x ServingRuntime for KServe runtime. To use the runtime, you must install the Spyre Operator and configure a hardware profile. For more information, see Spyre operator image and Working with hardware profiles.
1.3.5. Supported model-serving runtimes
OpenShift AI includes several preinstalled model-serving runtimes. You can use preinstalled model-serving runtimes to start serving models without modifying or defining the runtime yourself. You can also add a custom runtime to support a model.
See Supported Configurations for 3.x for a list of the supported model-serving runtimes and deployment requirements.
For help adding a custom runtime, see Adding a custom model-serving runtime.
1.3.6. Tested and verified model-serving runtimes
Tested and verified runtimes are community versions of model-serving runtimes that have been tested and verified against specific versions of OpenShift AI.
Red Hat tests the current version of a tested and verified runtime each time there is a new version of OpenShift AI. If a new version of a tested and verified runtime is released in the middle of an OpenShift AI release cycle, it will be tested and verified in an upcoming release.
See Supported Configurations for 3.x for a list of tested and verified runtimes in OpenShift AI.
Tested and verified runtimes are not directly supported by Red Hat. You are responsible for ensuring that you are licensed to use any tested and verified runtimes that you add, and for correctly configuring and maintaining them.
For more information, see Tested and verified runtimes in OpenShift AI.
Additional resources
Chapter 2. Configuring model servers
You configure model servers by using model-serving runtimes, which add support for a specified set of model frameworks and the model formats that they support.
2.1. Enabling the model serving platform
When you have installed KServe, you can use the Red Hat OpenShift AI dashboard to enable the model serving platform. You can also use the dashboard to enable model-serving runtimes for the platform.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have installed KServe.
Procedure
Enable the model serving platform as follows:
- In the left menu, click Settings → Cluster settings → General settings.
- Locate the Model serving platforms section.
- To enable the model serving platform for projects, select the Model serving platform checkbox.
- Click Save changes.
Enable preinstalled runtimes for the model serving platform as follows:
- In the left menu of the OpenShift AI dashboard, click Settings → Model resources and operations → Serving runtimes.
  The Serving runtimes page shows preinstalled runtimes and any custom runtimes that you have added. For more information about preinstalled runtimes, see Supported runtimes.
- Set the runtime that you want to use to Enabled.
The model serving platform is now available for model deployments.
2.2. Enabling speculative decoding and multi-modal inferencing
You can configure the vLLM NVIDIA GPU ServingRuntime for KServe runtime to use speculative decoding, a parallel processing technique to optimize inferencing time for large language models (LLMs).
You can also configure the runtime to support inferencing for vision-language models (VLMs). VLMs are a subset of multi-modal models that integrate both visual and textual data.
The following procedure describes customizing the vLLM NVIDIA GPU ServingRuntime for KServe runtime for speculative decoding and multi-modal inferencing.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- If you are using the vLLM model-serving runtime for speculative decoding with a draft model, you have stored the original model and the speculative model in the same folder within your S3-compatible object storage.
Procedure
- Follow the steps to deploy a model as described in Deploying models on the model serving platform.
- In the Serving runtime field, select the vLLM NVIDIA GPU ServingRuntime for KServe runtime.
To configure the vLLM model-serving runtime for speculative decoding by matching n-grams in the prompt, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--speculative-model=[ngram] --num-speculative-tokens=<NUM_SPECULATIVE_TOKENS> --ngram-prompt-lookup-max=<NGRAM_PROMPT_LOOKUP_MAX> --use-v2-block-manager

Replace <NUM_SPECULATIVE_TOKENS> and <NGRAM_PROMPT_LOOKUP_MAX> with your own values.

Note: Inferencing throughput varies depending on the model used for speculating with n-grams.
To configure the vLLM model-serving runtime for speculative decoding with a draft model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
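The argument list for this step was lost in extraction. Based on the placeholders named in the replace instructions that follow and vLLM's standard speculative-decoding flags, it likely has this shape (a sketch, not the verbatim original):

```
--model=<path_to_original_model> --speculative-model=<path_to_speculative_model> --num-speculative-tokens=<NUM_SPECULATIVE_TOKENS> --use-v2-block-manager
```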
- Replace <path_to_speculative_model> and <path_to_original_model> with the paths to the speculative model and original model in your S3-compatible object storage.
- Replace <NUM_SPECULATIVE_TOKENS> with your own value.
To configure the vLLM model-serving runtime for multi-modal inferencing, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--trust-remote-code

Note: Only use the --trust-remote-code argument with models from trusted sources.

- Click Deploy.
Verification
If you have configured the vLLM model-serving runtime for speculative decoding, use the following example command to verify API requests to your deployed model:
curl -v https://<inference_endpoint_url>:443/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer <token>"

If you have configured the vLLM model-serving runtime for multi-modal inferencing, use the following example command to verify API requests to the vision-language model (VLM) that you have deployed:
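The VLM verification command did not survive extraction. A sketch using the OpenAI-compatible chat completions API follows; the endpoint URL, token, model name, and image URL are placeholders:

```shell
# Sketch: querying a deployed vision-language model
# (endpoint, token, model name, and image URL are placeholders)
curl -s https://<inference_endpoint_url>:443/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{
    "model": "<model_name>",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "<image_url>"}}
      ]
    }]
  }'
```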
2.3. Adding a custom model-serving runtime
A model-serving runtime adds support for a specified set of model frameworks and the model formats supported by those frameworks. You can use the preinstalled runtimes that are included with OpenShift AI. You can also add your own custom runtimes if the default runtimes do not meet your needs.
As an administrator, you can use the OpenShift AI interface to add and enable a custom model-serving runtime. You can then choose the custom runtime when you deploy a model on the model serving platform.
Red Hat does not provide support for custom runtimes. You are responsible for ensuring that you are licensed to use any custom runtimes that you add, and for correctly configuring and maintaining them.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have built your custom runtime and added the image to a container image repository such as Quay.
Procedure
From the OpenShift AI dashboard, click Settings → Model resources and operations → Serving runtimes.
The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled.
To add a custom runtime, choose one of the following options:
- To start with an existing runtime (for example, vLLM NVIDIA GPU ServingRuntime for KServe), click the action menu (⋮) next to the existing runtime and then click Duplicate.
- To add a new custom runtime, click Add serving runtime.
- In the Select the model serving platforms this runtime supports list, select Single-model serving platform.
- In the Select the API protocol this runtime supports list, select REST or gRPC.
Optional: If you started a new runtime (rather than duplicating an existing one), add your code by choosing one of the following options:
Upload a YAML file
- Click Upload files.
In the file browser, select a YAML file on your computer.
The embedded YAML editor opens and shows the contents of the file that you uploaded.
Enter YAML code directly in the editor
- Click Start from scratch.
- Enter or paste YAML code directly in the embedded editor.
Note: In many cases, creating a custom runtime requires adding new or custom parameters to the env section of the ServingRuntime specification.

- Click Add.
The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the custom runtime that you added is automatically enabled. The API protocol that you specified when creating the runtime is shown.
- Optional: To edit your custom runtime, click the action menu (⋮) and select Edit.
Verification
- The custom model-serving runtime that you added is shown in an enabled state on the Serving runtimes page.
2.4. Adding a tested and verified runtime
In addition to preinstalled and custom model-serving runtimes, you can also use Red Hat tested and verified model-serving runtimes to support your requirements. For more information about Red Hat tested and verified runtimes, see Tested and verified runtimes for Red Hat OpenShift AI.
You can use the Red Hat OpenShift AI dashboard to add and enable tested and verified runtimes for the model serving platform. You can then choose the runtime when you deploy a model on the model serving platform.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- If you are deploying the IBM Z Accelerated for NVIDIA Triton Inference Server runtime, you have access to IBM Cloud Container Registry to pull the container image. For more information about obtaining credentials to the IBM Cloud Container Registry, see Downloading the IBM Z Accelerated for NVIDIA Triton Inference Server container image.
- If you are deploying the IBM Power Accelerated Triton Inference Server runtime, you can access the container image from the Triton Inference Server Quay repository.
Procedure
From the OpenShift AI dashboard, click Settings → Model resources and operations → Serving runtimes.
The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled.
- Click Add serving runtime.
- In the Select the model serving platforms this runtime supports list, select Single-model serving platform.
- In the Select the API protocol this runtime supports list, select REST or gRPC.
- Click Start from scratch.
Follow these steps to add the IBM Power Accelerated for NVIDIA Triton Inference Server runtime:
If you selected the REST API protocol, enter or paste the following YAML code directly in the embedded editor.
Follow these steps to add the IBM Z Accelerated for NVIDIA Triton Inference Server runtime:
If you selected the REST API protocol, enter or paste the following YAML code directly in the embedded editor.
If you selected the gRPC API protocol, enter or paste the following YAML code directly in the embedded editor.
Follow these steps to add the NVIDIA Triton Inference Server runtime:
If you selected the REST API protocol, enter or paste the following YAML code directly in the embedded editor.
If you selected the gRPC API protocol, enter or paste the following YAML code directly in the embedded editor.
Follow these steps to add the Seldon MLServer runtime:
If you selected the REST API protocol, enter or paste the following YAML code directly in the embedded editor.
If you selected the gRPC API protocol, enter or paste the following YAML code directly in the embedded editor.
- In the metadata.name field, make sure that the value of the runtime you are adding does not match a runtime that you have already added.
- Optional: To use a custom display name for the runtime that you are adding, add a metadata.annotations.openshift.io/display-name field and specify a value.
  Note: If you do not configure a custom display name for your runtime, OpenShift AI shows the value of the metadata.name field.
- Click Create.
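The display-name annotation above can be sketched as follows; the runtime name and display-name value are illustrative, not from the original document:

```yaml
# Sketch: setting a custom display name on a serving runtime
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: kserve-triton
  annotations:
    openshift.io/display-name: Triton ServingRuntime
```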
The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the runtime that you added is automatically enabled. The API protocol that you specified when creating the runtime is shown.
- Optional: To edit the runtime, click the action menu (⋮) and select Edit.
Verification
- The model-serving runtime that you added is shown in an enabled state on the Serving runtimes page.
Chapter 3. Configuring model servers on the NVIDIA NIM model serving platform
You configure and create a model server on the NVIDIA NIM model serving platform when you deploy an NVIDIA-optimized model. During the deployment process, you select a specific NIM from the available list and configure its properties, such as the number of replicas, server size, and the hardware profile.
3.1. Enabling the NVIDIA NIM model serving platform
As an OpenShift AI administrator, you can use the Red Hat OpenShift AI dashboard to enable the NVIDIA NIM model serving platform.
If you previously enabled the NVIDIA NIM model serving platform in OpenShift AI, and then upgraded to a newer version, re-enter your NVIDIA personal API key to re-enable the NVIDIA NIM model serving platform.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have enabled the model serving platform. You do not need to enable a preinstalled runtime. For more information about enabling the model serving platform, see Enabling the model serving platform.
- The disableNIMModelServing dashboard configuration option is set to false. For more information about setting dashboard configuration options, see Customizing the dashboard.
- You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
- You have an NVIDIA Cloud Account (NCA) and can access the NVIDIA GPU Cloud (NGC) portal. For more information, see NVIDIA GPU Cloud user guide.
- Your NCA account is associated with the NVIDIA AI Enterprise Viewer role.
- You have generated a personal API key on the NGC portal. For more information, see Generating a Personal API Key.
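The disableNIMModelServing option named in the prerequisites lives in the dashboard configuration resource. A minimal sketch, assuming the OdhDashboardConfig resource and the default name used by the dashboard:

```yaml
# Sketch: enabling NVIDIA NIM serving in the dashboard configuration
# (resource name odh-dashboard-config is an assumption)
apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config
spec:
  dashboardConfig:
    disableNIMModelServing: false
```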
Procedure
- In the left menu of the OpenShift AI dashboard, click Applications → Explore.
- On the Explore page, find the NVIDIA NIM tile.
- Click Enable on the application tile.
- Enter your personal API key and then click Submit.
Verification
- The NVIDIA NIM application that you enabled is displayed on the Enabled page.
Chapter 4. Customizing model deployments
You can customize a model’s deployment to suit your specific needs, for example, to deploy a particular family of models or to enhance an existing deployment. You can modify the runtime configuration for a specific deployment by setting additional serving runtime arguments and environment variables.
These customizations apply only to the selected model deployment and do not change the default runtime configuration. You can set these parameters when you first deploy a model or by editing an existing deployment.
4.1. Customizing the parameters of a deployed model-serving runtime
You might need additional parameters beyond the default ones to deploy specific models or to enhance an existing model deployment. In such cases, you can modify the parameters of an existing runtime to suit your deployment needs.
Customizing the parameters of a runtime only affects the selected model deployment.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have deployed a model.
Procedure
From the OpenShift AI dashboard, click AI hub → Deployments.
The Deployments page opens.
- Click Stop next to the name of the model you want to customize.
Click the action menu (⋮) and select Edit.
The Configuration parameters section shows predefined serving runtime parameters, if any are available.
Customize the runtime parameters in the Configuration parameters section:
- Modify the values in Additional serving runtime arguments to define how the deployed model behaves.
Modify the values in Additional environment variables to define variables in the model’s environment.
Note: Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail.
- After you are done customizing the runtime parameters, click Redeploy to save.
- Click Start to deploy the model with your changes.
Verification
- Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
Confirm that the arguments and variables that you set appear in spec.predictor.model.args and spec.predictor.model.env by one of the following methods:

- Checking the InferenceService YAML from the OpenShift Console.
Using the following command in the OpenShift CLI:
oc get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>
4.2. Customizable model serving runtime parameters
You can modify the parameters of an existing model serving runtime to suit your deployment needs.
For more information about parameters for each of the supported serving runtimes, see the following table:
| Serving runtime | Resource |
|---|---|
| NVIDIA Triton Inference Server | |
| OpenVINO Model Server | |
| Seldon MLServer | |
| vLLM NVIDIA GPU ServingRuntime for KServe | vLLM: Engine Arguments |
| vLLM AMD GPU ServingRuntime for KServe | vLLM: Engine Arguments |
| vLLM Intel Gaudi Accelerator ServingRuntime for KServe | vLLM: Engine Arguments |
4.3. Customizing the vLLM model-serving runtime
In certain cases, you might need to add flags or environment variables to the vLLM ServingRuntime for KServe runtime to deploy a family of LLMs.
The following procedure describes how to customize the vLLM model-serving runtime to deploy a Llama, Granite, or Mistral model.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- For Llama model deployment, you have downloaded a meta-llama-3 model to your object storage.
- For Granite model deployment, you have downloaded a granite-7b-instruct or granite-20B-code-instruct model to your object storage.
- For Mistral model deployment, you have downloaded a mistral-7B-Instruct-v0.3 model to your object storage.
- You have enabled the vLLM ServingRuntime for KServe runtime.
- You have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery Operator on your cluster. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs
Procedure
- Follow the steps to deploy a model as described in Deploying models on the model serving platform.
- In the Serving runtime field, select vLLM ServingRuntime for KServe.
If you are deploying a meta-llama-3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--distributed-executor-backend=mp --max-model-len=6144

If you are deploying a granite-7B-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--distributed-executor-backend=mp

Sets the backend to multiprocessing for distributed model workers.
If you are deploying a granite-20B-code-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--distributed-executor-backend=mp --tensor-parallel-size=4 --max-model-len=6448

If you are deploying a mistral-7B-Instruct-v0.3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:
--distributed-executor-backend=mp --max-model-len=15344

- Click Deploy.
Verification
- Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
For granite models, use the following example command to verify API requests to your deployed model:
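The example command for granite models did not survive extraction. A sketch using the OpenAI-compatible completions endpoint follows; the endpoint URL, token, and served model name are placeholders:

```shell
# Sketch: verifying a granite deployment through the completions endpoint
# (URL, token, and model name are placeholders)
curl -s https://<inference_endpoint_url>:443/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"model": "granite", "prompt": "What is OpenShift AI?", "max_tokens": 50}'
```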
4.4. Setting a default cluster-wide deployment strategy
You can set a default deployment strategy for new model deployments across the cluster.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have enabled model serving on your cluster.
Procedure
- In the dashboard, navigate to Settings → Cluster settings.
- Click on the General settings tab.
- Scroll down to the Model deployment options section.
In the Default deployment strategy list, select the desired cluster default:
- Rolling update
- Recreate
- Click Save changes at the bottom of the page.
Verification
- Follow the instructions to deploy a new model as described in Deploying models on the model serving platform.
- In the Advanced settings page of the deployment wizard, locate the Deployment strategy section.
- The preselected deployment strategy should match the new default you configured.