Chapter 2. New features and enhancements

This section describes new features and enhancements in Red Hat OpenShift AI 2.10.

Important

This version of OpenShift AI supports using data science pipelines with DSP 2.0. If you are using OpenShift AI 2.8 and want to continue using data science pipelines with DSP 1.0, Red Hat recommends that you stay on the 2.8 version. For more information, see Support Removals.

2.1. New features

vLLM runtime for KServe

The single-model serving platform (which uses KServe) now includes vLLM, a high-performance runtime specialized for large language models. vLLM uses PagedAttention, a memory-management technique that speeds up LLM inference on supported accelerators.

The vLLM runtime is compatible with the OpenAI API. The initial version of vLLM requires GPUs and is not optimized for CPU inference.
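Because the runtime exposes an OpenAI-compatible API, a deployed model can be queried with any OpenAI-style client. The sketch below builds a request body for the `/v1/completions` endpoint; the route host (`BASE_URL`) and model name are placeholder assumptions you would replace with the values from your own inference service.

```python
import json

# Assumption: replace with the external route of your inference service.
BASE_URL = "https://my-vllm-route.apps.example.com/v1"

def build_completion_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Build the JSON body for a /v1/completions call to an
    OpenAI-compatible server such as the vLLM runtime."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }

# "my-granite-model" is a hypothetical model name for illustration.
body = build_completion_request("my-granite-model", "OpenShift AI is")
print(json.dumps(body))

# To send the request (requires the `requests` package and a live endpoint):
# requests.post(f"{BASE_URL}/completions", json=body)
```

The same endpoint shape also works with the official OpenAI Python client by pointing its `base_url` at the model's route.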

If you have configured monitoring for the single-model serving platform, you can also view vLLM runtime metrics.

Improved dashboard experience
This release introduces a new home page designed for easy access to data science projects, learning resources, and overviews of primary solution areas, making it easier to start using OpenShift AI. The new home page also provides OpenShift AI administrators with access to key functionality, simplifying product configuration. The previous home page is still accessible from the left navigation pane under Applications > Enabled.

2.2. Enhancements

Removal of internal image registry dependency
You can now create and use workbenches in OpenShift AI without enabling the internal OpenShift Container Platform image registry. If you update a cluster to enable or disable the internal image registry, you must recreate existing workbenches for the registry changes to take effect.
New workbench images for Intel AI Tools integration

The Intel AI Tools (formerly Intel oneAPI AI Analytics Toolkit) integration has been enhanced with three new workbench images, which include optimizations for popular frameworks and libraries such as PyTorch and TensorFlow. These optimizations provide improved performance on Intel hardware, helping data scientists accelerate end-to-end data science and analytics pipelines on Intel architecture.

You must install the Intel AI Tools Operator on your cluster to be able to select the new workbench images.

Support for OpenShift AI Self-Managed on ROSA HCP
OpenShift AI for Self-Managed is now supported on Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (ROSA HCP).