Chapter 2. New features and enhancements
This section describes new features and enhancements in Red Hat OpenShift AI.
2.1. New features
- Support for AMD GPUs
- The AMD ROCm workbench image adds support for the AMD graphics processing unit (GPU) Operator, significantly boosting the processing performance for compute-intensive activities. This feature provides you with access to drivers, development tools, and APIs that support AI workloads and a wide range of models. Additionally, the AMD ROCm workbench image includes machine learning libraries to support AI frameworks such as TensorFlow and PyTorch. The feature also provides access to images that you can use to explore serving and training or tuning use cases with AMD GPUs.
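For example, inside a workbench based on the AMD ROCm image, a quick way to confirm that PyTorch can see the AMD GPUs is a short check such as the following (a minimal sketch; ROCm builds of PyTorch expose AMD devices through the torch.cuda API):

import torch

# On ROCm builds of PyTorch, AMD GPUs are exposed through the torch.cuda API (HIP backend).
print("ROCm/HIP version:", torch.version.hip)          # None on CUDA-only builds
print("Accelerator available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"Device {i}: {torch.cuda.get_device_name(i)}")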
- NVIDIA NIM model serving platform
With the NVIDIA NIM model serving platform, you can deploy NVIDIA-optimized models by using NVIDIA NIM inference services in OpenShift AI. NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across the cloud, data centers, and workstations. NVIDIA NIM supports a wide range of AI models, including open-source community and NVIDIA AI Foundation models, ensuring seamless, scalable AI inferencing on premises or in the cloud through industry-standard APIs.
For more information, see About the NVIDIA NIM model serving platform.
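As an illustration only, NIM inference services expose an OpenAI-compatible API, so a deployed model can typically be queried with a standard HTTP client. The endpoint URL, model name, and token below are placeholders, not values from this release:

import requests

# Placeholder values: replace with your deployed NIM inference endpoint, model name, and token.
url = "https://<nim-inference-endpoint>/v1/chat/completions"
headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}
payload = {
    "model": "<nim-model-name>",
    "messages": [{"role": "user", "content": "Summarize what NVIDIA NIM provides."}],
    "max_tokens": 128,
}
response = requests.post(url, headers=headers, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])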
- Support for Intel Gaudi 3 accelerators
- Support for Intel Gaudi 3 accelerators is now available. The vLLM ServingRuntime with Gaudi accelerators support for KServe is a high-throughput and memory-efficient inference and serving runtime that supports Intel Gaudi accelerators. For more information, see Deploying models on the single-model serving platform.
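A minimal sketch of an InferenceService that targets Gaudi accelerators, created with the Kubernetes Python client; the name, namespace, runtime name, storage URI, and resource values are assumptions and depend on how the serving runtime is configured in your cluster:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

# Assumed names: adjust the namespace, runtime name, and storage URI for your environment.
inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "example-gaudi-model", "namespace": "my-project"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "vLLM"},
                "runtime": "vllm-gaudi-runtime",            # assumed runtime name
                "storageUri": "oci://<registry>/<model>",    # placeholder
                "resources": {
                    "requests": {"habana.ai/gaudi": "1"},
                    "limits": {"habana.ai/gaudi": "1"},
                },
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="my-project",
    plural="inferenceservices",
    body=inference_service,
)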
- Language Model Evaluation as a Service
The new orchestrator enables the deployment of a secure, scalable Language Model Evaluation as a Service (LM-Eval-aaS). Leveraging open-source tools, this service integrates the lm-evaluation-harness with Unitxt task cards for efficient and secure model evaluation using industry-standard and proprietary benchmarks.
LM-Eval-aaS includes the following key features:
- Orchestrator Deployment Assets: Initial assets to deploy and manage the LM-Eval-aaS orchestrator.
- Task Card Integration: Support for Unitxt task cards to define custom preprocessing and evaluation workflows.
- Benchmarking Support: Compatibility with both standard and proprietary evaluation benchmarks.
For more information, see Evaluating large language models.
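LM-Eval-aaS builds on the upstream lm-evaluation-harness. The following is a minimal sketch of what the underlying harness does when run directly in Python; the model and task names are examples only, and the service itself is driven through its deployment assets rather than this API:

from lm_eval.evaluator import simple_evaluate

# Example model and tasks only; the hosted service wraps this kind of call behind its own API.
results = simple_evaluate(
    model="hf",                                   # Hugging Face transformers backend
    model_args="pretrained=facebook/opt-125m",    # small example model
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["hellaswag"])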
- Customizable serving runtime parameters
- You can now pass parameter values and environment variables to your runtimes when serving a model. Customization of the runtime parameters is particularly important in GenAI use cases involving vLLM.
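For illustration, the extra arguments and environment variables are passed through to the model's runtime container. The fragment below sketches the relevant part of an InferenceService predictor with example vLLM options; the specific flags and values are illustrative, not defaults:

# Illustrative fragment of an InferenceService predictor spec: additional vLLM arguments
# and environment variables passed to the serving runtime. Flag values are examples only.
predictor_model = {
    "modelFormat": {"name": "vLLM"},
    "args": [
        "--max-model-len=8192",    # example vLLM argument
        "--dtype=half",            # example vLLM argument
    ],
    "env": [
        {"name": "HF_HUB_OFFLINE", "value": "1"},   # example environment variable
    ],
}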
- Support for deploying quantized models
- You can use the vLLM ServingRuntime for KServe runtime to deploy models that are quantized for the Marlin kernel. If your model is quantized for Marlin, vLLM automatically uses the Marlin kernel based on the underlying hardware. For other quantized models, you can use the --quantization=marlin custom parameter. For information about supported hardware, see Supported Hardware for Quantization Kernels on the vLLM website.
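As a local sanity check before deploying, the same option can be exercised through the vLLM Python API. This is a sketch only; the model name is a placeholder and must already be quantized in a Marlin-compatible format:

from vllm import LLM, SamplingParams

# Placeholder model: use a checkpoint that is already quantized for the Marlin kernel.
llm = LLM(model="<org>/<marlin-quantized-model>", quantization="marlin")
outputs = llm.generate(["Briefly explain weight quantization."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)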
- code-server workbench image
The code-server workbench image included in Red Hat OpenShift AI, previously available as a Technology Preview feature, is now generally available. For more information, see Working in code-server.
With the code-server workbench image, you can customize your workbench environment by using a variety of extensions to add new languages, themes, debuggers, and connect to additional services. You can also enhance the efficiency of your data science work with syntax highlighting, auto-indentation, and bracket matching.
Note: Elyra-based pipelines are not available with the code-server workbench image.
2.2. Enhancements
- Custom connection types
- Administrators can use the enhanced connections feature to configure custom connections to data sources such as databases, making it easier for users to access data for model development. Additionally, the built-in connection type for URI-based repositories gives users easy access to models from repositories such as Hugging Face for model serving.
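For example, when a connection is attached to a workbench, its fields are typically surfaced as environment variables. The sketch below reads an S3-style connection from inside a notebook; the variable names follow the common S3 data connection layout and may differ for custom connection types:

import os

import boto3

# Typical environment variables injected by an S3-style data connection;
# custom connection types define their own field names.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
for obj in s3.list_objects_v2(Bucket=os.environ["AWS_S3_BUCKET"]).get("Contents", []):
    print(obj["Key"])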
- NVIDIA Triton Inference Server version 24.10 runtime: additional models tested and verified
The NVIDIA Triton Inference Server version 24.10 runtime has been tested with the following models for both KServe (REST and gRPC) and ModelMesh (REST):
- Forest Inference Library (FIL)
- Python
- TensorRT
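For reference, a model served by the Triton runtime through KServe REST can be queried with the open inference (v2) protocol. This is a sketch with placeholder endpoint, model name, and tensor shape:

import requests

# Placeholder endpoint and model name; input name, shape, and datatype depend on the model.
url = "https://<inference-endpoint>/v2/models/<model-name>/infer"
payload = {
    "inputs": [
        {
            "name": "input__0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}
response = requests.post(url, json=payload, timeout=30)
print(response.json()["outputs"])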
- Distributed workloads: additional training images tested and verified
Several additional training images are tested and verified:
- ROCm-compatible KFTO cluster image: A new ROCm-compatible KFTO cluster image, quay.io/modh/training:py311-rocm61-torch241, is tested and verified. This image is compatible with AMD accelerators that are supported by ROCm 6.1.
- ROCm-compatible Ray cluster images: The ROCm-compatible Ray cluster images quay.io/modh/ray:2.35.0-py39-rocm61 and quay.io/modh/ray:2.35.0-py311-rocm61, previously available as a Developer Preview feature, are tested and verified. These images are compatible with AMD accelerators that are supported by ROCm 6.1.
- CUDA-compatible KFTO image: The CUDA-compatible KFTO cluster image, previously available as a Developer Preview feature, is tested and verified. The image is now available in a new location: quay.io/modh/training:py311-cuda121-torch241. This image is compatible with NVIDIA GPUs that are supported by CUDA 12.1.
These images are AMD64 images, which might not work on other architectures. For more information about the latest available training images in Red Hat OpenShift AI, see Red Hat OpenShift AI Supported Configurations.
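A minimal sketch of how one of these images might be referenced when requesting a Ray cluster with the CodeFlare SDK; the cluster name, namespace, and worker count are placeholders, and the exact ClusterConfiguration fields can vary between SDK versions:

from codeflare_sdk import Cluster, ClusterConfiguration

# Placeholder sizing; the image below is the ROCm-compatible Ray cluster image noted above.
cluster = Cluster(ClusterConfiguration(
    name="ray-rocm-example",
    namespace="my-project",
    num_workers=2,
    image="quay.io/modh/ray:2.35.0-py311-rocm61",
))
cluster.up()
cluster.wait_ready()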