
Chapter 2. New features and enhancements


This section describes new features and enhancements in Red Hat OpenShift AI 2.10.

Important

This version of OpenShift AI supports using data science pipelines with DSP 2.0. If you are using OpenShift AI 2.8 and want to continue using data science pipelines with DSP 1.0, Red Hat recommends that you stay on the 2.8 version. For more information, see Support Removals.

2.1. New Features

vLLM runtime for KServe

The single-model serving platform (which uses KServe) now includes vLLM, a high-performance runtime specialized for serving large language models. vLLM uses PagedAttention, a memory-management technique for attention key-value caches, to increase serving throughput for many popular LLMs on supported accelerators.

The vLLM runtime is compatible with the OpenAI API. The initial version of vLLM requires GPUs and is not optimized for CPU inference.

If you have configured monitoring for the single-model serving platform, you can also view vLLM runtime metrics.
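Because the vLLM runtime exposes an OpenAI-compatible API, clients can query a deployed model with a standard OpenAI-style completion request. The following sketch builds such a request payload; the endpoint URL and model name are placeholders, not values from this release, and you would substitute the inference endpoint and model name of your own deployment.

```python
import json

# Hypothetical values: replace with the inference endpoint and model
# name of the model you deployed on the single-model serving platform.
ENDPOINT = "https://<inference-endpoint>/v1/completions"
MODEL = "my-llm"

# vLLM serves an OpenAI-compatible API, so a completion request uses
# the standard OpenAI payload shape.
payload = {
    "model": MODEL,
    "prompt": "What is OpenShift AI?",
    "max_tokens": 64,
    "temperature": 0.0,
}

body = json.dumps(payload)
print(body)

# To send the request, POST `body` to ENDPOINT with your usual HTTP
# client (for example, the `requests` library), including any
# Authorization header your serving platform requires.
```

The same payload shape works with existing OpenAI client libraries pointed at the vLLM endpoint, which is the main benefit of the API compatibility.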

Improved dashboard experience
This release introduces a new home page designed for easy access to data science projects, learning resources, and overviews of primary solution areas, making it easier to start using OpenShift AI. The new home page also provides OpenShift AI administrators with access to key functionality, simplifying product configuration. The previous home page is still accessible from the left navigation pane under Applications > Enabled.

2.2. Enhancements

Removal of internal image registry dependency
You can now create and use workbenches in OpenShift AI without enabling the internal OpenShift Container Platform image registry. If you update a cluster to enable or disable the internal image registry, you must recreate existing workbenches for the registry changes to take effect.
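The internal image registry is controlled by the cluster-scoped image registry configuration. As a sketch (not specific to this release), the `managementState` field in that configuration is what enables or disables the registry:

```yaml
# Cluster image registry configuration (sketch).
# managementState: Managed enables the internal registry;
# managementState: Removed disables it.
# Edit with: oc edit configs.imageregistry.operator.openshift.io cluster
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed   # or "Removed" to disable the internal registry
```

After changing this setting, remember that existing workbenches must be recreated for the change to take effect, as noted above.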
New workbench images for Intel AI Tools integration

The Intel AI Tools (formerly Intel oneAPI AI Analytics Toolkit) integration has been enhanced with three new workbench images, which include optimizations for popular frameworks and libraries such as PyTorch and TensorFlow. These optimizations provide improved performance on Intel hardware, helping data scientists accelerate end-to-end data science and analytics pipelines on Intel architecture.

You must install the Intel AI Tools Operator on your cluster to be able to select the new workbench images.

Support for OpenShift AI Self-Managed on ROSA HCP
OpenShift AI for Self-Managed is now supported on Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (ROSA HCP).


© 2024 Red Hat, Inc.