Upgrading OpenShift AI Cloud Service


Red Hat OpenShift AI Cloud Service 1

Upgrade OpenShift AI on an OpenShift Dedicated or Red Hat OpenShift Service on AWS (ROSA Classic) cluster

Abstract

Upgrade OpenShift AI on an OpenShift Dedicated or Red Hat OpenShift Service on AWS (ROSA Classic) cluster.

Preface

The Red Hat OpenShift AI Add-on is automatically updated as new releases or versions become available.

Chapter 1. Overview of upgrading OpenShift AI

Red Hat OpenShift AI is automatically updated as new releases or versions become available. Currently, no administrator action is necessary to trigger the upgrade process.

When an OpenShift AI upgrade occurs, you should complete the tasks described in Requirements for upgrading OpenShift AI.

Notes:

  • Before you can use an accelerator in OpenShift AI, your instance must have the associated hardware profile. If your OpenShift cluster instance has an accelerator, its hardware profile is preserved after the upgrade. For more information about accelerators, see Working with accelerators.
  • Notebook images are integrated into the image stream during the upgrade and subsequently appear in the OpenShift AI dashboard. Notebook images are constructed externally; they are prebuilt images that change on a quarterly basis rather than with every OpenShift AI upgrade.
Important

Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Data science pipelines are now based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.

Data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. After upgrading to OpenShift AI with data science pipelines 2.0, it is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server. If you are a current data science pipelines user, do not upgrade to OpenShift AI with data science pipelines 2.0 until you are ready to migrate to the new data science pipelines solution.

OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI with data science pipelines 2.0, you must manually migrate your existing data science pipelines 1.0 instances and update your workbenches. For more information, see Migrating to data science pipelines 2.0.

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer usage of this installation of Argo Workflows. To upgrade to OpenShift AI with data science pipelines 2.0, ensure that no separate installation of Argo Workflows exists on your cluster.
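
One way to check whether a separate Argo Workflows installation exists is to look for its resources before you upgrade. The following is a sketch that assumes the upstream Argo Workflows CRD name; on a cluster that does not yet run data science pipelines 2.0, the presence of this CRD likely indicates a separate installation:

  $ oc get crd workflows.argoproj.io
  $ oc get subscriptions.operators.coreos.com -A | grep -i argo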

Chapter 2. Configuring the upgrade strategy for OpenShift AI

As a cluster administrator, you can configure either an automatic or manual upgrade strategy for the Red Hat OpenShift AI Operator.

Important

By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you intend to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before upgrading it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.

For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • The Red Hat OpenShift AI Operator is installed.

Procedure

  1. Log in to the OpenShift cluster web console as a cluster administrator.
  2. In the Administrator perspective, in the left menu, select Operators → Installed Operators.
  3. Click the Red Hat OpenShift AI Operator.
  4. Click the Subscription tab.
  5. Under Update approval, click the pencil icon and select one of the following update strategies:

    • Automatic: New updates are installed as soon as they become available.
    • Manual: A cluster administrator must approve any new update before installation begins.
  6. Click Save.
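
Alternatively, you can set the update strategy from the CLI by patching the Operator's Subscription. The following sketch uses placeholders for the Subscription name and namespace, which vary by installation; list the Subscriptions on your cluster to find the correct values:

  $ oc get subscriptions.operators.coreos.com -A
  $ oc patch subscription <subscription-name> -n <namespace> \
      --type merge -p '{"spec": {"installPlanApproval": "Manual"}}'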

Chapter 3. Requirements for upgrading OpenShift AI

When upgrading OpenShift AI, you must complete the following tasks.

Check the components in the DataScienceCluster object

When you upgrade Red Hat OpenShift AI, the upgrade process automatically uses the values from the previous DataScienceCluster object.

After the upgrade, you should inspect the DataScienceCluster object and optionally update the status of any components as described in Updating the installation status of Red Hat OpenShift AI components by using the web console.

Note

New components are not automatically added to the DataScienceCluster object during upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
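
For example, to have the Operator manage a component that is not yet listed in the object, add an entry for it under spec.components. The following sketch enables the trainingoperator component; substitute the component that you want to use:

  spec:
    components:
      trainingoperator:
        managementState: Managed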

Migrate data science pipelines

Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Data science pipelines are now based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.

Data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.

OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. Before upgrading OpenShift AI, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Migrating to data science pipelines 2.0.

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer usage of this installation of Argo Workflows.

If you upgrade to OpenShift AI with data science pipelines 2.0 while an Argo Workflows installation that was not installed by data science pipelines exists on your cluster, OpenShift AI components are not upgraded. To complete the component upgrade, disable data science pipelines or remove the separate installation of Argo Workflows; the component upgrade then completes automatically.
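
For example, to disable data science pipelines while you address the conflict, set the component to Removed in the default-dsc DataScienceCluster object:

  spec:
    components:
      datasciencepipelines:
        managementState: Removed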

Address KServe requirements

For the KServe component, which is used by the single-model serving platform to serve large models, you must meet the following requirements:

Update workflows interacting with OdhDashboardConfig resource

Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.

Table 3.1. User management resource update

  Field           OpenShift AI 2.16 and earlier      OpenShift AI 2.17 and later
  apiVersion      opendatahub.io/v1alpha             services.platform.opendatahub.io/v1alpha1
  kind            OdhDashboardConfig                 Auth
  name            odh-dashboard-config               auth
  Admin groups    spec.groupsConfig.adminGroups      spec.adminGroups
  User groups     spec.groupsConfig.allowedGroups    spec.allowedGroups
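
For example, a GitOps workflow that previously patched spec.groupsConfig in the OdhDashboardConfig resource would instead manage an Auth resource similar to the following sketch, where the group names are placeholders for your own OpenShift groups:

  apiVersion: services.platform.opendatahub.io/v1alpha1
  kind: Auth
  metadata:
    name: auth
  spec:
    adminGroups:
      - <admin-group>
    allowedGroups:
      - <user-group>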

Update Kueue

In OpenShift AI, cluster administrators use Kueue to configure quota management for distributed workloads.

When upgrading from OpenShift AI 2.17 or earlier, the version of the MultiKueue Custom Resource Definitions (CRDs) changes from v1alpha1 to v1beta1.

However, if the kueue component is set to Managed, the Red Hat OpenShift AI Operator does not automatically remove the v1alpha1 MultiKueue CRDs during the upgrade. The deployment of the Kueue component then becomes blocked, as indicated in the default-dsc DataScienceCluster custom resource, where the value of the kueueReady condition remains set to False.

You can resolve this problem as follows:

Note

The MultiKueue feature is not currently supported in Red Hat OpenShift AI. If you created any resources based on the MultiKueue CRDs, those resources will be deleted when you delete the CRDs. If you do not want to lose your data, create a backup before deleting the CRDs.

  1. Log in to the OpenShift Console.
  2. In the Administrator perspective, click Administration → CustomResourceDefinitions.
  3. In the search field, enter multik.
  4. Update the MultiKueueCluster CRD as follows:

    1. Click the CRD name, and click the YAML tab.
    2. Ensure that the metadata:labels section includes the following entry:

      app.opendatahub.io/kueue: 'true'
    3. Click Save.
  5. Repeat the above steps to update the MultiKueueConfig CRD.
  6. Remove the MultiKueueCluster and MultiKueueConfig CRDs, by completing the following steps for each CRD:

    1. Click the Actions menu.
    2. Click Delete CustomResourceDefinition.
    3. Click Delete to confirm the deletion.

The Red Hat OpenShift AI Operator starts the Kueue Controller, and Kueue then automatically creates the v1beta1 MultiKueue CRDs. In the default-dsc DataScienceCluster custom resource, the kueueReady condition changes to True. For information about how to check that the kueue-controller-manager-<pod-id> pod is Running, see Installing the distributed workloads components.
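
If you prefer the CLI, the console steps above correspond to labeling and then deleting the two CRDs. This sketch assumes the upstream MultiKueue CRD names from the Kueue project; verify the names on your cluster before deleting anything:

  $ oc label crd multikueueclusters.kueue.x-k8s.io app.opendatahub.io/kueue=true
  $ oc label crd multikueueconfigs.kueue.x-k8s.io app.opendatahub.io/kueue=true
  $ oc delete crd multikueueclusters.kueue.x-k8s.io multikueueconfigs.kueue.x-k8s.io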

Chapter 4. Updating the installation status of Red Hat OpenShift AI components by using the web console

You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster.

Important

If you upgraded OpenShift AI, the upgrade process automatically used the values of the previous version’s DataScienceCluster object. New components are not automatically added to the DataScienceCluster object.

After upgrading OpenShift AI:

  • Inspect the default DataScienceCluster object to check and optionally update the managementState status of the existing components.
  • Add any new components to the DataScienceCluster object.

Prerequisites

  • Red Hat OpenShift AI is installed as an Add-on to your Red Hat OpenShift cluster.
  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. On the DataScienceClusters page, click the default object.
  5. Click the YAML tab.

    An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object, similar to the following example:

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          managementState: Removed
        kserve:
          managementState: Removed
        kueue:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  7. Click Save.

    For any components that you updated, OpenShift AI initiates a rollout of the affected pods so that they use the updated image.

  8. If you are upgrading from OpenShift AI 2.19 or earlier, upgrade the Authorino Operator to the stable update channel, version 1.2.1 or later.

    1. Update Authorino to the latest available release in the tech-preview-v1 channel (1.1.2), if you have not done so already.
    2. Switch to the stable channel:

      1. Navigate to the Subscription settings of the Authorino Operator.
      2. Under Update channel, click the highlighted tech-preview-v1 channel.
      3. Change the channel to stable.
    3. Select the update option for Authorino 1.2.1.
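
    If you manage Operator subscriptions from the CLI or through GitOps, the channel switch corresponds to updating the channel field of the Authorino Subscription. The name and namespace below are placeholders; verify them on your cluster:

    $ oc patch subscription <authorino-subscription> -n <namespace> \
        --type merge -p '{"spec": {"channel": "stable"}}'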

Verification

  • Confirm that there is a running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In this namespace, confirm that there is a running pod for each of the OpenShift AI components that you installed.
  • Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

  • In the Red Hat OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed components.
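
You can also perform these checks from the CLI. This sketch derives the full resource name from the API group shown in the CR example above:

  $ oc get pods -n redhat-ods-applications
  $ oc get datascienceclusters.datasciencecluster.opendatahub.io default-dsc -o yaml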

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.