Chapter 3. Installing and deploying OpenShift AI


Important

You cannot upgrade from OpenShift AI 2.25 or any earlier version to 3.0. OpenShift AI 3.0 introduces significant technology and component changes and is intended for new installations only. To use OpenShift AI 3.0, install the Red Hat OpenShift AI Operator on a cluster running OpenShift Container Platform 4.19 or later and select the fast-3.x channel.

Support for upgrades will be available in a later release, including upgrades from OpenShift AI 2.25 to a stable 3.x version.

For more information, see the Why upgrades to OpenShift AI 3.0 are not supported Knowledgebase article.

Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence (AI) applications. It provides a fully supported environment that lets you rapidly develop, train, test, and deploy machine learning models on-premises or in the public cloud.

OpenShift AI is provided as a managed cloud service add-on for Red Hat OpenShift or as self-managed software that you can install on-premises or in the public cloud on OpenShift.

For information about installing OpenShift AI as self-managed software on your OpenShift cluster in a disconnected environment, see Installing and uninstalling OpenShift AI Self-Managed in a disconnected environment. For information about installing OpenShift AI as a managed cloud service add-on, see Installing and uninstalling OpenShift AI Cloud Service.

Installing OpenShift AI involves the following high-level tasks:

  1. Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed.
  2. Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator.
  3. Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components.
  4. Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.
  5. Configure user and administrator groups to provide user access to OpenShift AI. See Adding users to OpenShift AI user groups.
  6. Access the OpenShift AI dashboard. See Accessing the OpenShift AI dashboard.

3.1. Requirements for OpenShift AI Self-Managed

You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster.

3.1.1. Platform requirements

Subscriptions

  • A subscription for Red Hat OpenShift AI Self-Managed is required.
  • If you want to install OpenShift AI Self-Managed in a Red Hat-managed cloud environment, you must also have a subscription for one of the following platforms:

    • Red Hat OpenShift Dedicated on Amazon Web Services (AWS) or Google Cloud Platform (GCP)
    • Red Hat OpenShift Service on Amazon Web Services (ROSA classic)
    • Red Hat OpenShift Service on Amazon Web Services with hosted control planes (ROSA HCP)
    • Microsoft Azure Red Hat OpenShift
    • Red Hat OpenShift Kubernetes Engine (OKE)

      Note

      While OpenShift Kubernetes Engine (OKE) typically restricts the installation of certain post-installation Operators, Red Hat provides a specific licensing exception for Red Hat OpenShift AI users. This exception exclusively applies to Operators used to support Red Hat OpenShift AI workloads. Installing or using these Operators for purposes unrelated to OpenShift AI is a violation of the OKE service agreement.

Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one.

Cluster administrator access

  • Cluster administrator access is required to install OpenShift AI.
  • You can use an existing cluster or create a new one that meets the supported version requirements.

Supported OpenShift versions

The following OpenShift versions are supported for installing OpenShift AI:

  • OpenShift Container Platform 4.19 to 4.20. See OpenShift Container Platform installation overview.

    • To deploy models by using Distributed Inference with llm-d, your cluster must be running version 4.20 or later.
  • OpenShift Dedicated 4. See Creating an OpenShift Dedicated cluster.
  • ROSA classic 4. See Install ROSA classic clusters.
  • ROSA HCP 4. See Install ROSA with HCP clusters.
  • OpenShift Kubernetes Engine (OKE). See About OpenShift Kubernetes Engine.

    Note

    While OpenShift Kubernetes Engine (OKE) typically restricts the installation of certain post-installation Operators, Red Hat provides a specific licensing exception for Red Hat OpenShift AI users. This exception exclusively applies to Operators used to support Red Hat OpenShift AI workloads. Installing or using these Operators for purposes unrelated to OpenShift AI is a violation of the OKE service agreement.

    • The following Operators are required dependencies for Red Hat OpenShift AI 2.x and 3.x. These Operators are not supported on OKE, but can be installed if an exception is granted.

      Red Hat OpenShift AI version  Operators (unsupported; exception required)

      2.x                           Authorino Operator, Service Mesh Operator, Serverless Operator

      3.x                           Job-set-operator, openshift-custom-metrics-autoscaler-operator, cert-manager Operator, Leader Worker Set Operator, Red Hat Connectivity Link Operator, Kueue Operator (RHBOK), SR-IOV Operator, GPU Operator (with custom configurations), OpenTelemetry, Tempo, Cluster Observability Operator

Important

In OpenStack, CodeReady Containers (CRC), and other private cloud environments without integrated external DNS, you must manually configure DNS A or CNAME records after installing the Operator and components, when the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.

Cluster configuration

  • A minimum of 2 worker nodes with at least 8 CPUs and 32 GiB RAM each is required to install the Operator.
  • For single-node OpenShift clusters, the node must have at least 32 CPUs and 128 GiB RAM.
  • Additional resources are required depending on your workloads.
  • Open Data Hub must not be installed on the cluster.

Storage requirements

  • Your cluster must have a default storage class that supports dynamic provisioning. To confirm that a default storage class is configured, run the following command:

    oc get storageclass

    If no storage class is marked as the default, see Changing the default storage class in the OpenShift Container Platform documentation.
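    If you want to script this check, note that the default class carries the annotation storageclass.kubernetes.io/is-default-class: "true". The following helper is an illustrative sketch (the function name default_storageclass is not part of OpenShift AI); it parses oc get storageclass -o json output and requires python3 for JSON parsing:

    ```shell
    # Print the name of the default storage class, if any.
    # Reads `oc get storageclass -o json` output on stdin.
    default_storageclass() {
      python3 -c '
    import json, sys
    for sc in json.load(sys.stdin).get("items", []):
        ann = sc["metadata"].get("annotations", {})
        if ann.get("storageclass.kubernetes.io/is-default-class") == "true":
            print(sc["metadata"]["name"])
    '
    }

    # Usage (assumes you are logged in to the cluster):
    #   oc get storageclass -o json | default_storageclass
    ```

    If the helper prints nothing, no storage class is marked as the default.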

Identity provider configuration

Internet access

  • In addition to general internet access, the following domains must be accessible during the installation of OpenShift AI:

    • cdn.redhat.com
    • subscription.rhn.redhat.com
    • registry.access.redhat.com
    • registry.redhat.io
    • quay.io
  • For environments that build or customize CUDA-based images using NVIDIA’s base images, or that directly pull artifacts from the NVIDIA NGC catalog, the following domains must also be accessible:

    • ngc.download.nvidia.cn
    • developer.download.nvidia.com
Note

Access to these NVIDIA domains is not required for standard OpenShift AI installations. The CUDA-based container images used by OpenShift AI are prebuilt and hosted on Red Hat’s registry at registry.redhat.io.
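As an optional pre-flight check (not part of the official installation procedure), you can probe the required Red Hat domains from the host that runs the installation. The script below is a sketch that assumes curl is available; any HTTP response, even an error status, counts as reachable, and only a connection failure marks a domain unreachable:

```shell
# Optional pre-flight connectivity check for the domains required
# during OpenShift AI installation.
REQUIRED_DOMAINS="cdn.redhat.com subscription.rhn.redhat.com registry.access.redhat.com registry.redhat.io quay.io"

check_domains() {
  for domain in $1; do
    # Without -f, curl exits 0 on any HTTP response; nonzero only on
    # connection failure, which is the condition we care about here.
    if curl -sS -o /dev/null --connect-timeout 5 "https://${domain}"; then
      echo "${domain}: reachable"
    else
      echo "${domain}: UNREACHABLE"
    fi
  done
}

# Usage: check_domains "$REQUIRED_DOMAINS"
```

If your environment builds CUDA-based images, append the NVIDIA domains listed above to REQUIRED_DOMAINS before running the check.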

Object storage

  • Several components of OpenShift AI require or can use S3-compatible object storage, such as AWS S3, MinIO, Ceph, or IBM Cloud Storage. Object storage provides HTTP-based access to data by using the S3 API, which is the standard interface for most object storage services.
  • Object storage is required for:

    • Single-model serving platform, for storing and deploying models.
    • AI pipelines, for storing artifacts, logs, and intermediate results.
  • Object storage can also be used by:

    • Workbenches, for accessing large datasets.
    • Kueue-based workloads, for reading input data and writing output results.
    • Code executed inside pipelines, for persisting generated models or other artifacts.

Custom namespaces

  • By default, OpenShift AI uses predefined namespaces, but you can define custom namespaces for the Operator, applications, and workbenches if needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly.
  • If you use custom namespaces, create and label them before installing the OpenShift AI Operator. See Configuring custom namespaces.

3.1.2. Component requirements

Meet the requirements for the components and capabilities that you plan to use.

Workbenches (workbenches)

AI Pipelines (aipipelines)

  • To store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage, configure write access to your S3 bucket on your storage account.
  • If your cluster is running in FIPS mode, any custom container images for data science pipelines must be based on UBI 9 or RHEL 9. This ensures compatibility with FIPS-approved pipeline components and prevents errors related to mismatched OpenSSL or GNU C Library (glibc) versions.
  • To use your own Argo Workflows instance, after installing the OpenShift AI Operator, see Configuring pipelines with your own Argo Workflows instance.

Kueue-based workloads (kueue, ray, trainingoperator)

Model serving platform (kserve)

  • Install the cert-manager Operator.

Distributed Inference with llm-d (advanced kserve)

Llama Stack and RAG workloads (llamastackoperator)

  • Install the Llama Stack Operator.
  • Install the Red Hat OpenShift Service Mesh Operator 3.x.
  • Install the cert-manager Operator.
  • Ensure you have GPU-enabled nodes available on your cluster.
  • Install the Node Feature Discovery Operator.
  • Install the NVIDIA GPU Operator.
  • Configure access to S3-compatible object storage for your model artifacts.
  • See Working with Llama Stack.

Model registry (modelregistry)

  • Configure access to an external MySQL database 5.x or later; 8.x is recommended.
  • Configure access to S3-compatible object storage.
  • See Creating a model registry.

3.2. Configuring custom namespaces

By default, OpenShift AI uses the following predefined namespaces:

  • redhat-ods-operator contains the Red Hat OpenShift AI Operator
  • redhat-ods-applications includes the dashboard and other required components of OpenShift AI
  • rhods-notebooks is where basic workbenches are deployed by default

If needed, you can define custom namespaces to use instead of the predefined ones before installing OpenShift AI. This flexibility supports environments with naming policies or conventions and allows cluster administrators to control where components such as workbenches are deployed.

Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly.

Prerequisites

  • You have access to an OpenShift AI cluster with cluster administrator privileges.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • You have not yet installed the Red Hat OpenShift AI Operator.

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI (oc) as shown in the following example:

    oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. Optional: To configure a custom operator namespace:

    1. Create a namespace YAML file named operator-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <operator-namespace> # (1)

      (1) Defines the operator namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f operator-namespace.yaml

      You see output similar to the following:

      namespace/<operator-namespace> created
    3. When you install the Red Hat OpenShift AI Operator, use this namespace instead of redhat-ods-operator.
  3. Optional: To configure a custom applications namespace:

    1. Create a namespace YAML file named applications-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <applications-namespace> # (1)
        labels:
          opendatahub.io/application-namespace: 'true' # (2)

      (1) Defines the applications namespace.
      (2) Adds the required label.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f applications-namespace.yaml

      You see output similar to the following:

      namespace/<applications-namespace> created
  4. Optional: To configure a custom workbench namespace:

    1. Create a namespace YAML file named workbench-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <workbench-namespace> # (1)

      (1) Defines the workbench namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f workbench-namespace.yaml

      You see output similar to the following:

      namespace/<workbench-namespace> created
    3. When you install the Red Hat OpenShift AI components, specify this namespace for the spec.workbenches.workbenchNamespace field. You cannot change the default workbench namespace after you have installed the Red Hat OpenShift AI Operator.

3.3. Installing the Red Hat OpenShift AI Operator

This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster by using either the OpenShift command-line interface (CLI) or the OpenShift web console.

Note

If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information.

The following procedure shows how to use the OpenShift CLI (oc) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.19 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • If you are using custom namespaces, you have created and labeled them as required.

    Note

    The example commands in this procedure use the predefined operator namespace. If you are using a custom operator namespace, replace redhat-ods-operator with your namespace.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in your terminal.

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a namespace for installation of the Operator by performing the following actions:

    Note

    If you have already created a custom namespace for the Operator, you can skip this step.

    1. Create a namespace YAML file named rhods-operator-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: redhat-ods-operator # (1)

      (1) Defines the operator namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f rhods-operator-namespace.yaml

      You see output similar to the following:

      namespace/redhat-ods-operator created
  4. Create an operator group for installation of the Operator by performing the following actions:

    1. Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml.

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator # (1)

      (1) Defines the operator namespace.
    2. Create the OperatorGroup object in your OpenShift cluster.

      $ oc create -f rhods-operator-group.yaml

      You see output similar to the following:

      operatorgroup.operators.coreos.com/rhods-operator created
  5. Create a subscription for installation of the Operator by performing the following actions:

    1. Create a Subscription object CR file, for example, rhods-operator-subscription.yaml.

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator # (1)
      spec:
        name: rhods-operator
        channel: <channel> # (2)
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        startingCSV: rhods-operator.x.y.z # (3)

      (1) Defines the operator namespace.
      (2) Sets the update channel. You must specify a value of fast, fast-x.y, stable, stable-x.y, eus-x.y, or alpha. For more information, see Understanding update channels.
      (3) Optional: Sets the operator version. If you do not specify a value, the subscription defaults to the latest operator version. For more information, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.
    2. Create the Subscription object in your OpenShift cluster to install the Operator.

      $ oc create -f rhods-operator-subscription.yaml

      You see output similar to the following:

      subscription.operators.coreos.com/rhods-operator created

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.
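You can also verify the installation from the CLI by checking the phase of the Operator's ClusterServiceVersion (CSV). The helper below is an illustrative sketch (the function name is not part of the product); it assumes the default CSV name prefix rhods-operator and the predefined operator namespace:

```shell
# Print the phase (Installing/Succeeded/Failed) of the OpenShift AI CSV.
# The phase is the last column of `oc get csv` output.
csv_phase() {
  awk '/^rhods-operator/ {print $NF}'
}

# Usage (replace redhat-ods-operator with your custom namespace if needed):
#   oc get csv -n redhat-ods-operator | csv_phase
```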

The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.19 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • If you are using custom namespaces, you have created and labeled them as required.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → OperatorHub.
  3. On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box.
  4. Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens.
  5. Select a Channel. For information about subscription update channels, see Understanding update channels.
  6. Select a Version.
  7. Click Install. The Install Operator page opens.
  8. Review or change the selected channel and version as needed.
  9. For Installation mode, note that the only available value is All namespaces on the cluster (default). This installation mode makes the Operator available to all namespaces in the cluster.
  10. For Installed Namespace, choose one of the following options:

    • To use the predefined operator namespace, select the Operator recommended Namespace: redhat-ods-operator option.
    • To use the custom operator namespace that you created, select the Select a Namespace option, and then select the namespace from the drop-down list.
  11. For Update approval, select one of the following update strategies:

    • Automatic: New updates in the update channel are installed as soon as they become available.
    • Manual: A cluster administrator must approve any new updates before installation begins.

      Important

      By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version.

      If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.

      For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article.

  12. Click Install.

    The Installing Operators pane appears. When the installation finishes, a checkmark appears next to the Operator name.

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.

You can use the OpenShift command-line interface (CLI) or OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster.

To install Red Hat OpenShift AI components by using the OpenShift CLI (oc), you must create and configure a DataScienceCluster object.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.

For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console.

Prerequisites

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in your terminal.

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml.

    apiVersion: datasciencecluster.opendatahub.io/v2
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        aipipelines:
          argoWorkflowsControllers:
            managementState: Removed # (1)
          managementState: Removed
        dashboard:
          managementState: Removed
        feastoperator:
          managementState: Removed
        kserve:
          managementState: Removed
        kueue:
          defaultClusterQueueName: default
          defaultLocalQueueName: default
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelregistry:
          managementState: Removed
          registriesNamespace: rhoai-model-registries
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: rhods-notebooks # (2)

    (1) To use your own Argo Workflows instance with the aipipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
    (2) To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
  4. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  5. Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components.

    $ oc create -f rhods-operator-dsc.yaml

    You see output similar to the following:

    datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
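If you prefer the CLI over the web console for the readiness check that follows, you can poll the DataScienceCluster phase until it reaches Ready. This loop is a sketch (the polling interval and function name are arbitrary choices, not part of the product):

```shell
# Poll the DataScienceCluster status until the phase reaches Ready.
wait_for_dsc() {
  phase=""
  while [ "$phase" != "Ready" ]; do
    phase=$(oc get datasciencecluster default-dsc -o jsonpath='{.status.phase}')
    echo "current phase: ${phase:-<none>}"
    [ "$phase" = "Ready" ] || sleep 10
  done
}

# Usage: wait_for_dsc
```

During the initial installation, expect the phase to report Progressing for a few minutes before it changes to Ready.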

Verification

  1. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  2. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab.
    4. For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

      Note

      When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

    5. Click the default-dsc link to display the data science cluster details.
    6. Select the YAML tab.
    7. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

  3. In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
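The status.installedComponents check above can also be scripted. The helper below is an illustrative sketch (not part of the product) that reads the CR as JSON and prints each component's installed status; it requires python3 for JSON parsing:

```shell
# List installed components and their status from the DataScienceCluster CR.
# Reads `oc get datasciencecluster default-dsc -o json` output on stdin.
installed_components() {
  python3 -c '
import json, sys
status = json.load(sys.stdin).get("status", {})
for name, installed in sorted(status.get("installedComponents", {}).items()):
    print(f"{name}: {installed}")
'
}

# Usage: oc get datasciencecluster default-dsc -o json | installed_components
```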

Next steps

  • If you are using OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, manually configure DNS A or CNAME records after the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.
  • Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.

To install Red Hat OpenShift AI components by using the OpenShift web console, you must create and configure a DataScienceCluster object.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
  • You have cluster administrator privileges for your OpenShift cluster.
  • If you are using custom namespaces, you have created the namespaces.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. Click Create DataScienceCluster.
  5. For Configure via, select YAML view.

    An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object, similar to the following example:

    apiVersion: datasciencecluster.opendatahub.io/v2
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        aipipelines:
          argoWorkflowsControllers:
            managementState: Removed # (1)
          managementState: Removed
        dashboard:
          managementState: Removed
        feastoperator:
          managementState: Removed
        kserve:
          managementState: Removed
        kueue:
          defaultClusterQueueName: default
          defaultLocalQueueName: default
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelregistry:
          managementState: Removed
          registriesNamespace: rhoai-model-registries
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: rhods-notebooks # (2)

    (1) To use your own Argo Workflows instance with the aipipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
    (2) To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  7. Click Create.

Verification

  1. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab.
    4. For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

      Note

      When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

    5. Click the default-dsc link to display the data science cluster details.
    6. Select the YAML tab.
    7. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component appears in the spec.components section of the CR in the component-name: {} format, the component is not installed.

  2. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  3. In the OpenShift AI dashboard, view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
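The console checks above can also be performed with the oc CLI. The following sketch assumes the default applications namespace, redhat-ods-applications; substitute your custom namespace if you configured one:

```shell
# Confirm the DataScienceCluster status phase is Ready
oc get datasciencecluster default-dsc -o jsonpath='{.status.phase}{"\n"}'

# List the components that report installed status in the CR
oc get datasciencecluster default-dsc -o jsonpath='{.status.installedComponents}{"\n"}'

# Confirm that there are running pods in the applications namespace
oc get pods -n redhat-ods-applications --field-selector=status.phase=Running
```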

Next steps

  • If you are using OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, manually configure DNS A or CNAME records after the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.
  • Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.
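For the private cloud environments mentioned above, the LoadBalancer ingress address can be read with oc once it is assigned. The service and namespace names below are placeholders, not values defined by this document; replace them with the gateway service from your own configuration:

```shell
# Replace <service> and <namespace> with your gateway service and its namespace,
# then create the DNS A or CNAME record pointing at the returned address
oc get svc <service> -n <namespace> \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
```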

You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster.
  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. On the DataScienceClusters page, click the default-dsc object.
  5. Click the YAML tab.

    An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object, similar to the following example:

    apiVersion: datasciencecluster.opendatahub.io/v2
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        aipipelines:
          argoWorkflowsControllers:
            managementState: Removed
          managementState: Removed
        dashboard:
          managementState: Removed
        feastoperator:
          managementState: Removed
        kserve:
          managementState: Removed
        kueue:
          defaultClusterQueueName: default
          defaultLocalQueueName: default
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelregistry:
          managementState: Removed
          registriesNamespace: rhoai-model-registries
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: rhods-notebooks
  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  7. Click Save.

    For any components that you updated, OpenShift AI initiates a rollout of the affected pods so that they use the updated image.

  8. If you are upgrading from OpenShift AI 2.19 or earlier, upgrade the Authorino Operator to the stable update channel, version 1.2.1 or later.

    1. Update Authorino to the latest available release in the tech-preview-v1 channel (1.1.2), if you have not done so already.
    2. Switch to the stable channel:

      1. Navigate to the Subscription settings of the Authorino Operator.
      2. Under Update channel, click the highlighted tech-preview-v1.
      3. Change the channel to stable.
    3. Select the update option for Authorino 1.2.1.
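The same managementState changes can also be made from the command line with oc patch. The dashboard component is used here only as an example; substitute the component you want to enable or remove:

```shell
# Set one component's managementState on the default DataScienceCluster;
# the Operator then rolls out or removes that component accordingly
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec":{"components":{"dashboard":{"managementState":"Managed"}}}}'
```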

Verification

  1. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  2. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component appears in the spec.components section of the CR in the component-name: {} format, the component is not installed.

  3. In the OpenShift AI dashboard, view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.

3.4.4. Viewing installed OpenShift AI components

In the Red Hat OpenShift AI dashboard, you can view a list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components.

Prerequisites

  • OpenShift AI is installed in your OpenShift cluster.

Procedure

  1. Log in to the OpenShift AI dashboard.
  2. In the top navigation bar, click the help icon and then select About.

Verification

The About page shows a list of the installed OpenShift AI components along with their corresponding upstream components and upstream component versions.
