Chapter 1. About model-serving platforms
As an OpenShift AI administrator, you can enable your preferred serving platform and make it available for serving models. You can also add a custom or a tested and verified model-serving runtime.
1.1. About model serving
When you serve a model, you upload a trained model into Red Hat OpenShift AI for querying, which allows you to integrate your trained models into intelligent applications.
You can upload a model to S3-compatible object storage, a persistent volume claim, or an Open Container Initiative (OCI) image. You can then access and train the model from your project workbench. After training the model, you can serve or deploy it by using a model-serving platform.
Serving or deploying the model makes the model available as a service, or model runtime server, that you can access using an API. You can then access the inference endpoints for the deployed model from the dashboard and see predictions based on data inputs that you provide through API calls. Querying the model through the API is also called model inferencing.
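For example, querying a deployed model through its inference endpoint is typically an HTTP POST with a JSON payload. The sketch below assumes an OpenAI-compatible completions endpoint of the kind a vLLM-based runtime exposes; the endpoint URL, token, and model name are placeholders, not values from your cluster:

```python
import json
from urllib import request

def build_inference_payload(model: str, prompt: str, max_tokens: int = 50) -> dict:
    """Build the JSON body for an OpenAI-style /v1/completions request,
    the API shape exposed by vLLM-based runtimes."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def query_model(endpoint: str, token: str, payload: dict) -> dict:
    """POST the payload to the model's inference endpoint.
    (Not called in this sketch; requires a live endpoint and token.)"""
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Build a request body for a hypothetical model named "granite".
payload = build_inference_payload("granite", "What is model serving?")
print(json.dumps(payload))
```

The same payload can be sent with any HTTP client; the dashboard displays the endpoint URL and, if token authentication is enabled, the token to pass in the `Authorization` header.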
You can also serve models on the NVIDIA NIM model serving platform. The model-serving platform that you choose depends on your business needs:
- If you want to deploy each model on its own runtime server, select the model serving platform. The model serving platform is recommended for production use.
- If you want to use NVIDIA Inference Microservices (NIM) to deploy a model, select the NVIDIA NIM model serving platform.
1.1.1. Model serving platform
You can deploy each model from a dedicated model server, which can help you deploy, monitor, scale, and maintain models that require increased resources. Based on the KServe component, this model serving platform is ideal for serving large models.
The model serving platform is helpful for use cases such as:
- Large language models (LLMs)
- Generative AI
For more information about setting up the model serving platform, see Installing and managing Red Hat OpenShift AI components.
1.1.2. NVIDIA NIM model serving platform
You can deploy models using NVIDIA Inference Microservices (NIM) on the NVIDIA NIM model serving platform.
NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations.
NVIDIA NIM inference services are helpful for use cases such as:
- Using GPU-accelerated containers to run inference on models optimized by NVIDIA
- Deploying generative AI for virtual screening, content generation, and avatar creation
The NVIDIA NIM model serving platform is based on the model serving platform. To use the NVIDIA NIM model serving platform, you must first install the model serving platform.
For more information, see Installing and managing Red Hat OpenShift AI components.
1.2. Model-serving runtimes
You can serve models on the model serving platform by using model-serving runtimes. The configuration of a model-serving runtime is defined by the ServingRuntime and InferenceService custom resource definitions (CRDs).
1.2.1. ServingRuntime
The ServingRuntime CRD creates a serving runtime, an environment for deploying and managing a model. It creates the templates for pods that dynamically load and unload models of various formats and also exposes a service endpoint for inferencing requests.
The following YAML configuration is an example of the vLLM ServingRuntime for KServe model-serving runtime. The configuration includes various flags, environment variables, and command-line arguments.
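The original example is not reproduced here; the following is a minimal sketch of the general shape of such a `ServingRuntime`, with comments keyed to the numbered callouts below. All field values (names, port, image reference) are illustrative placeholders, not the exact shipped template:

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-runtime            # illustrative name
  annotations:
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'  # (1)
    openshift.io/display-name: vLLM ServingRuntime for KServe      # (2)
    prometheus.io/path: /metrics                                   # (3)
    prometheus.io/port: '8080'                                     # (4)
spec:
  containers:
    - name: kserve-container
      image: example.com/vllm-runtime:latest  # (8) placeholder; varies by accelerator
      command:                                # (7)
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - --port=8080
        - --model=/mnt/models                 # (5)
        - --served-model-name={{.Name}}       # (6)
      ports:
        - containerPort: 8080
          protocol: TCP
  multiModel: false                           # (9) one model per runtime server
  supportedModelFormats:                      # (10)
    - name: vLLM
      autoSelect: true
```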
1. The recommended accelerator to use with the runtime.
2. The name with which the serving runtime is displayed.
3. The endpoint used by Prometheus to scrape metrics for monitoring.
4. The port used by Prometheus to scrape metrics for monitoring.
5. The path where the model files are stored in the runtime container.
6. Passes the model name that is specified by the `{{.Name}}` template variable inside the runtime container specification to the runtime environment. The `{{.Name}}` variable maps to the `spec.predictor.name` field in the `InferenceService` metadata object.
7. The entrypoint command that starts the runtime container.
8. The runtime container image used by the serving runtime. This image differs depending on the type of accelerator used.
9. Specifies that the runtime is used for model serving.
10. Specifies the model formats supported by the runtime.
1.2.2. InferenceService
The InferenceService CRD creates a server, or inference service, that processes inference queries, passes them to the model, and then returns the inference output.
The inference service also performs the following actions:
- Specifies the location and format of the model.
- Specifies the serving runtime used to serve the model.
- Enables the passthrough route for gRPC or REST inference.
- Defines HTTP or gRPC endpoints for the deployed model.
The following example shows the InferenceService YAML configuration file that is generated when deploying a granite model with the vLLM runtime:
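The generated file is not reproduced here; a minimal sketch of what it might look like, assuming a model named granite deployed with a vLLM runtime (the deployment name, runtime name, and storage URI are illustrative placeholders):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: granite                  # illustrative deployment name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM               # format of the model
      runtime: vllm-runtime      # ServingRuntime used to serve the model
      storageUri: oci://registry.example.com/models/granite:latest  # model location (placeholder)
```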
1.3. Model-serving runtimes for accelerators
OpenShift AI provides support for accelerators through preinstalled model-serving runtimes.
1.3.1. NVIDIA GPUs
You can serve models with NVIDIA graphics processing units (GPUs) by using the vLLM NVIDIA GPU ServingRuntime for KServe runtime. To use the runtime, you must enable GPU support in OpenShift AI. This includes installing and configuring the Node Feature Discovery Operator on your cluster. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
1.3.2. Intel Gaudi accelerators
You can serve models with Intel Gaudi accelerators by using the vLLM Intel Gaudi Accelerator ServingRuntime for KServe runtime. To use the runtime, you must enable hybrid processing unit (HPU) support in OpenShift AI. This includes installing the Intel Gaudi Base Operator and configuring a hardware profile. For more information, see Intel Gaudi Base Operator OpenShift installation and Working with hardware profiles.
For information about recommended vLLM parameters, environment variables, supported configurations, and more, see vLLM with Intel® Gaudi® AI Accelerators.
Warm-up is a model initialization and performance optimization step that is useful for reducing cold-start delays and first-inference latency. Depending on the model size, warm-up can lead to longer model loading times.
While warm-up is highly recommended in production environments to avoid performance limitations, you can skip it in non-production environments to reduce model loading times and accelerate model development and testing cycles. To skip warm-up, follow the steps described in Customizing the parameters of a deployed model-serving runtime to add the following environment variable in the Configuration parameters section of your model deployment:
`VLLM_SKIP_WARMUP="true"`
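If you manage the deployment as YAML rather than through the dashboard, the same variable can be set in the predictor container environment. A sketch, assuming a KServe `InferenceService` (field paths only; surrounding fields omitted):

```yaml
spec:
  predictor:
    model:
      env:
        - name: VLLM_SKIP_WARMUP   # skip vLLM warm-up; non-production only
          value: "true"
```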
1.3.3. AMD GPUs
You can serve models with AMD GPUs by using the vLLM AMD GPU ServingRuntime for KServe runtime. To use the runtime, you must enable support for AMD graphics processing units (GPUs) in OpenShift AI. This includes installing the AMD GPU operator and configuring a hardware profile. For more information, see Deploying the AMD GPU operator on OpenShift in the AMD documentation and Working with hardware profiles.
1.3.4. IBM Spyre AI accelerators on x86 and IBM Z
Support for IBM Spyre AI Accelerators on x86 is currently available in Red Hat OpenShift AI 3.2 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Support for IBM Spyre AI Accelerators on s390x is currently available in Red Hat OpenShift AI 3.0 as a General Availability (GA) feature.
You can serve models with IBM Spyre AI accelerators on x86 by using the vLLM Spyre AI Accelerator ServingRuntime for KServe runtime. For IBM Z (s390x architecture), use the vLLM Spyre s390x ServingRuntime for KServe runtime. To use the runtime, you must install the Spyre Operator and configure a hardware profile. For more information, see Spyre operator image and Working with hardware profiles.
1.3.5. Supported model-serving runtimes
OpenShift AI includes several preinstalled model-serving runtimes. You can use preinstalled model-serving runtimes to start serving models without modifying or defining the runtime yourself. You can also add a custom runtime to support a model.
See Supported Configurations for 3.x for a list of the supported model-serving runtimes and deployment requirements.
For help adding a custom runtime, see Adding a custom model-serving runtime.
1.3.6. Tested and verified model-serving runtimes
Tested and verified runtimes are community versions of model-serving runtimes that have been tested and verified against specific versions of OpenShift AI.
Red Hat tests the current version of a tested and verified runtime each time there is a new version of OpenShift AI. If a new version of a tested and verified runtime is released in the middle of an OpenShift AI release cycle, it will be tested and verified in an upcoming release.
See Supported Configurations for 3.x for a list of tested and verified runtimes in OpenShift AI.
Tested and verified runtimes are not directly supported by Red Hat. You are responsible for ensuring that you are licensed to use any tested and verified runtimes that you add, and for correctly configuring and maintaining them.
For more information, see Tested and verified runtimes in OpenShift AI.
Additional resources