
Chapter 3. Requirements for upgrading OpenShift AI


When upgrading OpenShift AI in a disconnected environment, you must complete the following tasks.

Check the components in the DataScienceCluster object

When you upgrade Red Hat OpenShift AI, the upgrade process automatically uses the values from the previous DataScienceCluster object.

After the upgrade, you should inspect the DataScienceCluster object and optionally update the status of any components as described in Updating the installation status of Red Hat OpenShift AI components by using the web console.

Note

New components are not automatically added to the DataScienceCluster object during upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
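As a sketch of what such a manual edit might look like, the following fragment adds one component entry to the `spec.components` section of the DataScienceCluster object. The object name `default-dsc` and the `trustyai` component are examples only; substitute the names used in your installation:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc          # example name; check with "oc get datasciencecluster"
spec:
  components:
    # Entries carried over from the previous release are preserved by the
    # upgrade. A component introduced in the new release must be added by
    # hand, with its managementState set explicitly.
    trustyai:                # example component; use the component you need
      managementState: Managed
```

You can apply such a change with `oc edit datasciencecluster <name>` or through the web console, as described in the linked procedure.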

Note

If you are upgrading OpenShift AI on a cluster running in FIPS mode, any custom container images for data science pipelines must be based on UBI 9 or RHEL 9. This ensures compatibility with FIPS-approved pipeline components and prevents errors related to mismatched OpenSSL or GNU C Library (glibc) versions.

Migrate from embedded Kueue to Red Hat build of Kueue

The embedded Kueue component for managing distributed workloads is deprecated. OpenShift AI now uses the Red Hat build of Kueue Operator to provide enhanced workload scheduling for distributed training, workbench, and model serving workloads.

Before upgrading OpenShift AI, check if your environment is using the embedded Kueue component by verifying the spec.components.kueue.managementState field in the DataScienceCluster custom resource. If the field is set to Managed, you must complete the migration to the Red Hat build of Kueue Operator to avoid controller conflicts and ensure continued support for queue-based workloads.
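One way to check the field is a JSONPath query like the following. The object name `default-dsc` is an example; list the objects in your cluster first:

```shell
# List DataScienceCluster objects to find the name used in your cluster.
oc get datasciencecluster

# Print the embedded Kueue management state. An output of "Managed"
# means you must migrate to the Red Hat build of Kueue Operator.
oc get datasciencecluster default-dsc \
  -o jsonpath='{.spec.components.kueue.managementState}'
```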

Important

As part of the migration to Red Hat build of Kueue, you must manually delete the following legacy Kueue CRDs:

  • cohorts.kueue.x-k8s.io/v1alpha1
  • topologies.kueue.x-k8s.io/v1alpha1

If you have existing instances of these CRDs, you must manually back up their data, delete the instances, and recreate them using the v1beta1 API after the upgrade. If you do not complete these steps, the Kueue Operator enters a failed reconciliation loop, resulting in a Not Ready status for the DataScienceCluster. To avoid this conflict, ensure no active workloads depend on the legacy Kueue resources.
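The backup and deletion steps above could be performed with commands along these lines; the backup file names are arbitrary, and you should review the saved manifests before recreating any instances against the v1beta1 API:

```shell
# Back up any existing instances of the legacy resources before deleting
# the CRDs (skip a command if the resource type has no instances).
oc get cohorts.kueue.x-k8s.io -o yaml > cohorts-backup.yaml
oc get topologies.kueue.x-k8s.io -o yaml > topologies-backup.yaml

# Delete the legacy v1alpha1 CRDs so the Kueue Operator can reconcile
# cleanly. This also deletes any remaining instances of these resources.
oc delete crd cohorts.kueue.x-k8s.io topologies.kueue.x-k8s.io
```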

For more information, see Red Hat Build of Kueue 1.2 installation or upgrade fails with Kueue CRD reconciliation error.

This migration requires OpenShift 4.18 or later. For more information, see Migrating to the Red Hat build of Kueue Operator.

Address KServe requirements

For the KServe component, which is used by the single-model serving platform to serve large models, you must meet the following requirements:

  • To fully install and use KServe, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see Serving large models.
  • If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For more information, see Adding an authorization provider for the single-model serving platform.
  • If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed in the DataScienceCluster object), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
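For the last point, the relevant fields might look like the following. KServe is disabled in the DataScienceCluster object, while the Service Mesh configuration lives in the DSCInitialization object; the object names here (`default-dsc`, `default-dsci`) are examples, and the full procedure is in Disabling KServe dependencies:

```yaml
# DataScienceCluster: KServe not enabled
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    kserve:
      managementState: Removed
---
# DSCInitialization: disable the dependent Service Mesh configuration
apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization
metadata:
  name: default-dsci
spec:
  serviceMesh:
    managementState: Removed
```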

Address RAG dependencies

If you plan to deploy Retrieval-Augmented Generation (RAG) workloads by using Llama Stack, you must meet the following requirements:

  • You have GPU-enabled nodes available on your cluster and you have installed the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
  • You have access to storage for your model artifacts.
  • You have met the KServe installation prerequisites.

Verify Argo Workflows compatibility

If you use your own Argo Workflows instance for pipelines, verify that the installed version is compatible with this release of OpenShift AI. For details, see Supported Configurations.

Update workflows interacting with OdhDashboardConfig resource

Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.

Table 3.1. User management resource update

|              | OpenShift AI 2.16 and earlier   | OpenShift AI 2.17 and later               |
|--------------|---------------------------------|-------------------------------------------|
| apiVersion   | opendatahub.io/v1alpha          | services.platform.opendatahub.io/v1alpha1 |
| kind         | OdhDashboardConfig              | Auth                                      |
| name         | odh-dashboard-config            | auth                                      |
| Admin groups | spec.groupsConfig.adminGroups   | spec.adminGroups                          |
| User groups  | spec.groupsConfig.allowedGroups | spec.allowedGroups                        |
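Putting the new-style fields together, an Auth resource referenced by an updated workflow might look like this sketch; the group names are placeholders for the OpenShift groups used in your environment:

```yaml
apiVersion: services.platform.opendatahub.io/v1alpha1
kind: Auth
metadata:
  name: auth
spec:
  adminGroups:
    - rhods-admins           # example admin group; use your own group names
  allowedGroups:
    - system:authenticated   # example: allow all authenticated users
```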

Check the status of certificate management

You can use self-signed certificates in OpenShift AI.

After you upgrade, check the management status for Certificate Authority (CA) bundles as described in Working with certificates.
