
Chapter 3. Deploying OpenShift AI in a disconnected environment


Read this section to understand how to deploy Red Hat OpenShift AI as a development and testing environment for data scientists in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall. In this case, clusters cannot access the remote registries where Red Hat-provided OperatorHub sources reside. Instead, you can deploy the Red Hat OpenShift AI Operator to a disconnected environment by using a private registry to mirror the images.

Installing OpenShift AI in a disconnected environment involves the following high-level tasks:

  1. Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed.
  2. Mirror images to a private registry. See Mirroring images to a private registry for a disconnected installation.
  3. Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator.
  4. Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components.
  5. Configure user and administrator groups to provide user access to OpenShift AI. See Adding users to OpenShift AI user groups.
  6. Provide your users with the URL for the OpenShift cluster on which you deployed OpenShift AI. See Accessing the OpenShift AI dashboard.
  7. Optionally, configure and enable your accelerators in OpenShift AI so that your data scientists can run compute-intensive workloads in their models. See Enabling accelerators.

3.1. Requirements for OpenShift AI Self-Managed

You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster in a disconnected environment:

Product subscriptions

  • You must have a subscription for Red Hat OpenShift AI Self-Managed.

    Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one.

Cluster administrator access to your OpenShift cluster

  • You must have an OpenShift cluster with cluster administrator access. Use an existing cluster or create a cluster by following the OpenShift Container Platform documentation: Installing a cluster in a disconnected environment.
  • After you install a cluster, configure the Cluster Samples Operator by following the OpenShift Container Platform documentation: Configuring Samples Operator for a restricted cluster.
  • Your cluster must have at least 2 worker nodes with at least 8 CPUs and 32 GiB RAM available for OpenShift AI to use when you install the Operator. To ensure that OpenShift AI is usable, additional cluster resources are required beyond the minimum requirements.
  • To use OpenShift AI on single-node OpenShift, the node must have at least 32 CPUs and 128 GiB RAM.
  • Your cluster must be configured with a default storage class that can be dynamically provisioned.

    Confirm that a default storage class is configured by running the oc get storageclass command. If no storage classes are noted with (default) beside the name, follow the OpenShift Container Platform documentation to configure a default storage class: Changing the default storage class. For more information about dynamic provisioning, see Dynamic provisioning.

  • Open Data Hub must not be installed on the cluster.

For more information about managing the machines that make up an OpenShift cluster, see Overview of machine management.
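The default storage class check described above can also be scripted. The following sketch parses captured `oc get storageclass` output for the `(default)` marker that OpenShift places beside the default storage class; the sample output here is illustrative only, and on a real cluster you would pipe in the live command output instead:

```shell
# Sample output standing in for: oc get storageclass
# (illustrative storage class names; your cluster's will differ)
sample_output='NAME               PROVISIONER                    RECLAIMPOLICY
gp3-csi (default)  ebs.csi.aws.com                Delete
standard           kubernetes.io/no-provisioner   Delete'

# Look for the "(default)" annotation in the listing.
if printf '%s\n' "$sample_output" | grep -q '(default)'; then
  result="default storage class found"
else
  result="no default storage class; configure one before installing"
fi
echo "$result"
```

On a live cluster, replace the sample text with `oc get storageclass | grep -q '(default)'` to get the same pass/fail signal in automation.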

An identity provider configured for OpenShift

Internet access on the mirroring machine

  • Along with Internet access, the following domains must be accessible to mirror images required for the OpenShift AI Self-Managed installation:

    • cdn.redhat.com
    • subscription.rhn.redhat.com
    • registry.access.redhat.com
    • registry.redhat.io
    • quay.io
  • For environments that build or customize CUDA-based images using NVIDIA’s base images, or that directly pull artifacts from the NVIDIA NGC catalog, the following domains must also be accessible:

    • ngc.download.nvidia.cn
    • developer.download.nvidia.com
Note

Access to these NVIDIA domains is not required for standard OpenShift AI Self-Managed installations. The CUDA-based container images used by OpenShift AI are prebuilt and hosted on Red Hat’s registry at registry.redhat.io.

Create custom namespaces

  • By default, OpenShift AI uses predefined namespaces, but you can define custom namespaces for the operator, applications, and workbenches if needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly. If you are using custom namespaces, before installing the OpenShift AI Operator, you must have created and labeled them as required.
  • Before you can execute a pipeline in a disconnected environment, you must upload the images to your private registry. For more information, see Mirroring images to run pipelines in a restricted environment.
  • You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account.
  • If you are installing OpenShift AI on a cluster running in FIPS mode, any custom container images for data science pipelines must be based on UBI 9 or RHEL 9. This ensures compatibility with FIPS-approved pipeline components and prevents errors related to mismatched OpenSSL or GNU C Library (glibc) versions.

Install KServe dependencies

Install RAG dependencies

If you plan to deploy Retrieval-Augmented Generation (RAG) workloads by using Llama Stack, you must meet the following requirements:

  • You have GPU-enabled nodes available on your cluster and you have installed the Node Feature Discovery Operator and NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
  • You have access to storage for your model artifacts.
  • You have met the KServe installation prerequisites.

Access to object storage

3.2. Mirroring images to a private registry for a disconnected installation

You can install the Red Hat OpenShift AI Operator to your OpenShift cluster in a disconnected environment by mirroring the required container images to a private container registry. After mirroring the images to a container registry, you can install the Red Hat OpenShift AI Operator by using OperatorHub.

You can use the mirror registry for Red Hat OpenShift, a small-scale container registry, as a target for mirroring the required container images for OpenShift AI in a disconnected environment. Using the mirror registry for Red Hat OpenShift is optional if another container registry is already available in your installation environment.

Prerequisites

  • You have cluster administrator access to a running OpenShift Container Platform cluster, version 4.16 or greater.
  • You have credentials for Red Hat OpenShift Cluster Manager (https://console.redhat.com/openshift/).
  • Your mirroring machine is running Linux, has 100 GB of space available, and has access to the Internet so that it can obtain the images to populate the mirror repository.
  • You have installed the OpenShift CLI (oc).
  • You have reviewed the component requirements and identified all operators you must mirror in addition to the Red Hat OpenShift AI Operator. See Requirements for OpenShift AI Self-Managed. For example:

    • If you plan to use NVIDIA GPUs, you must also mirror the NVIDIA GPU Operator. See Configuring the NVIDIA GPU Operator in the OpenShift Container Platform documentation.
    • If you plan to use the single-model serving platform to serve large models, you must mirror the Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh.
    • If you plan to use the distributed workloads component, you must mirror the Ray cluster image.
Note

This procedure uses the oc-mirror plugin v2; the oc-mirror plugin v1 is now deprecated. For more information, see Changes from oc-mirror plugin v1 to v2 in the OpenShift documentation.

Procedure

  1. Create a mirror registry. See Creating a mirror registry with mirror registry for Red Hat OpenShift in the OpenShift Container Platform documentation.
  2. To mirror registry images, install the oc-mirror OpenShift CLI plugin v2 on your mirroring machine running Linux. See Installing the oc-mirror OpenShift CLI plugin in the OpenShift Container Platform documentation.

    Important

    The oc-mirror plugin v1 is deprecated. Red Hat recommends that you use the oc-mirror plugin v2 for continued support and improvements.

  3. Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. See Configuring credentials that allow images to be mirrored in the OpenShift Container Platform documentation.
  4. Open the example image set configuration file (rhoai-<version>.md) from the disconnected installer helper repository and examine its contents.

    The disconnected installer helper file includes a list of Additional images required to install OpenShift AI in a disconnected environment, as well as a list of older Unsupported images provided for reference only. These older images are no longer maintained by Red Hat but are included for convenience, such as when importing older resources or maintaining compatibility with previous environments.

  5. Using the example image set configuration file, create a file called imageset-config.yaml and populate it with values suitable for the image set configuration in your deployment.

    • To view a list of the available OpenShift versions, run the following command. This might take several minutes. If the command returns errors, repeat the steps in Configuring credentials that allow images to be mirrored.

      oc-mirror list operators
    • To see the available channels for a package in a specific version of OpenShift Container Platform (for example, 4.18), run the following command:

      oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.18 --package=<package_name>
    • For information about subscription update channels, see Understanding update channels.

      Important

      The example image set configurations are for demonstration purposes only and might need further alterations depending on your deployment.

      To identify the attributes most suitable for your deployment, see Image set configuration parameters and Image set configuration examples in the OpenShift Container Platform documentation.

      The list of Unsupported images in the helper file is provided for reference only and should not be included in your mirrored image set unless you have a specific need to import older resources or maintain compatibility with previous environments.

      Example imageset-config.yaml

      kind: ImageSetConfiguration
      apiVersion: mirror.openshift.io/v2alpha1
      mirror:
        operators:
          - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.19
            packages:
              - name: rhods-operator
                channels:
                  - name: stable
                    minVersion: 2.25.0
                    maxVersion: 2.25.0
              - name: <additional_operator_name>
                channels:
                  - name: stable
        additionalImages:
          - name: <additional_image_name>

  6. Download the specified image set configuration to a local file on your mirroring machine:

    • Replace <mirror_rhoai> with the target directory where you want to output the image set file.
    • The target directory path must start with file://.
    • The download might take several minutes.

      $ oc mirror -c imageset-config.yaml file://<mirror_rhoai> --v2
      Tip

      If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, set skipTLS to true in your image set configuration file and run the command again.

  7. Verify that the image set .tar files were created:

    $ ls <mirror_rhoai>

    Example output

    mirror_000001.tar  mirror_000002.tar

    If an archiveSize value was specified in the image set configuration file, the image set might be separated into multiple .tar files.

  8. Optional: Verify that the total size of the image set .tar files is around 75 GB:

    $ du -h --max-depth=1 ./<mirror_rhoai>/

    If the total size of the image set is significantly less than 75 GB, run the oc mirror command again.
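The size check in step 8 can be scripted. The following sketch sums the `mirror_*.tar` file sizes and flags an undersized image set; the placeholder files and artificially low threshold let it run anywhere, but for a real check you would point MIRROR_DIR at your <mirror_rhoai> directory and set MIN_BYTES to roughly 75 GB:

```shell
# Demo setup: a temp directory with two 1 KiB placeholder .tar files.
# In a real run, set MIRROR_DIR to your <mirror_rhoai> directory instead.
MIRROR_DIR=$(mktemp -d)
printf 'x%.0s' $(seq 1 1024) > "$MIRROR_DIR/mirror_000001.tar"
printf 'x%.0s' $(seq 1 1024) > "$MIRROR_DIR/mirror_000002.tar"

# Threshold: artificially low for the demo; use ~75 GB for a real image set.
MIN_BYTES=1024

# Sum the sizes of all mirror_*.tar files.
total=0
for f in "$MIRROR_DIR"/mirror_*.tar; do
  size=$(wc -c < "$f")
  total=$((total + size))
done

if [ "$total" -ge "$MIN_BYTES" ]; then
  status="image set size ok: $total bytes"
else
  status="image set too small ($total bytes); rerun oc mirror"
fi
echo "$status"
rm -rf "$MIRROR_DIR"
```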

  9. Upload the contents of the generated image set to your target mirror registry:

    • Replace <mirror_rhoai> with the directory that contains your image set .tar files.
    • Replace <registry.example.com:5000> with your mirror registry.

      $ oc mirror -c imageset-config.yaml --from file://<mirror_rhoai> docker://<registry.example.com:5000> --v2
      Tip

      If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, run the following command:

      $ oc mirror --dest-tls-verify false --from=./<mirror_rhoai> docker://<registry.example.com:5000> --v2
  10. Log in to your target OpenShift cluster using the OpenShift CLI as a user with the cluster-admin role.
  11. Verify that the YAML files are present for the ImageDigestMirrorSet and CatalogSource resources:

    • Replace <mirror_rhoai> with the directory that contains your image set .tar files.

      $ ls <mirror_rhoai>/working-dir/cluster-resources/

      Example output

      cs-redhat-operator-index.yaml
      idms-oc-mirror.yaml

  12. Install the generated resources into the cluster:

    • Replace <oc_mirror_workspace_path> with the path to your oc mirror workspace.

      $ oc apply -f <oc_mirror_workspace_path>/working-dir/cluster-resources

Verification

  • Verify that the CatalogSource and pod were created successfully:

    $ oc get catalogsource,pod -n openshift-marketplace

    This should return at least one catalog and two pods.

  • Check that the Red Hat OpenShift AI Operator exists in the OperatorHub:

    1. Log in to the OpenShift web console.
    2. Click Operators → OperatorHub.

      The OperatorHub page opens.

    3. Confirm that the Red Hat OpenShift AI Operator is shown.
  • If you mirrored additional operators, check that those operators exist in the OperatorHub.

3.3. Configuring custom namespaces

By default, OpenShift AI uses the following predefined namespaces:

  • redhat-ods-operator contains the Red Hat OpenShift AI Operator
  • redhat-ods-applications includes the dashboard and other required components of OpenShift AI
  • rhods-notebooks is where basic workbenches are deployed by default

If needed, you can define custom namespaces to use instead of the predefined ones before installing OpenShift AI. This flexibility supports environments with naming policies or conventions and allows cluster administrators to control where components such as workbenches are deployed.

Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly.

Prerequisites

  • You have access to an OpenShift AI cluster with cluster administrator privileges.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • You have not yet installed the Red Hat OpenShift AI Operator.

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI (oc) as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. Optional: To configure a custom operator namespace:

    1. Create a namespace YAML file named operator-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <operator-namespace> 1

      1 Defines the operator namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f operator-namespace.yaml

      You see output similar to the following:

      namespace/<operator-namespace> created
    3. When you install the Red Hat OpenShift AI Operator, use this namespace instead of redhat-ods-operator.
  3. Optional: To configure a custom applications namespace:

    1. Create a namespace YAML file named applications-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <applications-namespace> 1
        labels:
          opendatahub.io/application-namespace: 'true' 2

      1 Defines the applications namespace.
      2 Adds the required label.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f applications-namespace.yaml

      You see output similar to the following:

      namespace/<applications-namespace> created
  4. Optional: To configure a custom workbench namespace:

    1. Create a namespace YAML file named workbench-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <workbench-namespace> 1

      1 Defines the workbench namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f workbench-namespace.yaml

      You see output similar to the following:

      namespace/<workbench-namespace> created
    3. When you install the Red Hat OpenShift AI components, specify this namespace for the spec.workbenches.workbenchNamespace field. You cannot change the default workbench namespace after you have installed the Red Hat OpenShift AI Operator.
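The three optional namespace manifests above follow one pattern, so they can be generated from a single template. This sketch writes the manifest files locally (the my-rhoai-* names are placeholder examples, not required values); you would then create each one with `oc create -f <file>` as shown in the procedure:

```shell
# Emit a Namespace manifest: $1 = namespace name, $2 = optional label line.
emit_namespace() {
  printf 'apiVersion: v1\nkind: Namespace\nmetadata:\n  name: %s\n' "$1"
  if [ -n "$2" ]; then
    printf '  labels:\n    %s\n' "$2"
  fi
}

# Placeholder names; only the applications namespace needs the
# opendatahub.io/application-namespace label.
emit_namespace my-rhoai-operator "" > operator-namespace.yaml
emit_namespace my-rhoai-apps "opendatahub.io/application-namespace: 'true'" > applications-namespace.yaml
emit_namespace my-rhoai-workbenches "" > workbench-namespace.yaml

# You would then apply each file, for example:
#   oc create -f operator-namespace.yaml
grep -c 'opendatahub.io/application-namespace' applications-namespace.yaml
```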

3.4. Installing the Red Hat OpenShift AI Operator

This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster by using either the command-line interface (CLI) or the OpenShift web console.

Note

If you want to upgrade from a previous version of OpenShift AI rather than performing a new installation, see Upgrading OpenShift AI in a disconnected environment.

Note

If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information.

The following procedure shows how to use the OpenShift CLI (oc) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.16 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster.

  • If you are using custom namespaces, you have created and labeled them as required.

    Note

    The example commands in this procedure use the predefined operator namespace. If you are using a custom operator namespace, replace redhat-ods-operator with your namespace.

  • You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in your terminal.

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a namespace for installation of the Operator by performing the following actions:

    Note

    If you have already created a custom namespace for the Operator, you can skip this step.

    1. Create a namespace YAML file named rhods-operator-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: redhat-ods-operator 1

      1 Defines the operator namespace.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f rhods-operator-namespace.yaml

      You see output similar to the following:

      namespace/redhat-ods-operator created
  4. Create an operator group for installation of the Operator by performing the following actions:

    1. Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml.

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator 1

      1 Defines the operator namespace.
    2. Create the OperatorGroup object in your OpenShift cluster.

      $ oc create -f rhods-operator-group.yaml

      You see output similar to the following:

      operatorgroup.operators.coreos.com/rhods-operator created
  5. Create a subscription for installation of the Operator by performing the following actions:

    1. Create a Subscription object CR file, for example, rhods-operator-subscription.yaml.

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator 1
      spec:
        name: rhods-operator
        channel: <channel> 2
        source: cs-redhat-operator-index
        sourceNamespace: openshift-marketplace
        startingCSV: rhods-operator.x.y.z 3

      1 Defines the operator namespace.
      2 Sets the update channel. You must specify a value of fast, stable, stable-x.y, eus-x.y, or alpha. For more information, see Understanding update channels.
      3 Optional: Sets the operator version. If you do not specify a value, the subscription defaults to the latest operator version. For more information, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.
    2. Create the Subscription object in your OpenShift cluster to install the Operator.

      $ oc create -f rhods-operator-subscription.yaml

      You see output similar to the following:

      subscription.operators.coreos.com/rhods-operator created
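Steps 3 through 5 can also be combined into one multi-document manifest and applied in a single command. This is a sketch, not the documented procedure: the <channel> value is a placeholder you must replace as described in the callouts above, and you would apply the populated file with `oc apply -f rhods-operator-install.yaml`:

```shell
# Write the Namespace, OperatorGroup, and Subscription as one manifest.
# <channel> is a placeholder; set it to a valid update channel before applying.
cat > rhods-operator-install.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  name: rhods-operator
  channel: <channel>
  source: cs-redhat-operator-index
  sourceNamespace: openshift-marketplace
EOF

# Sanity check: count the document separators (2 separators = 3 documents).
docs=$(grep -c '^---$' rhods-operator-install.yaml)
echo "$((docs + 1)) documents written"
```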

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.

The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.16 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • If you are using custom namespaces, you have created and labeled them as required.
  • You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → OperatorHub.
  3. On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box.
  4. Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens.
  5. Select a Channel. For information about subscription update channels, see Understanding update channels.
  6. Select a Version.
  7. Click Install. The Install Operator page opens.
  8. Review or change the selected channel and version as needed.
  9. For Installation mode, note that the only available value is All namespaces on the cluster (default). This installation mode makes the Operator available to all namespaces in the cluster.
  10. For Installed Namespace, choose one of the following options:

    • To use the predefined operator namespace, select the Operator recommended Namespace: redhat-ods-operator option.
    • To use the custom operator namespace that you created, select the Select a Namespace option, and then select the namespace from the drop-down list.
  11. For Update approval, select one of the following update strategies:

    • Automatic: Your environment attempts to install new updates when they are available based on the content of your mirror.
    • Manual: A cluster administrator must approve any new updates before installation begins.

      Important

      By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version.

      If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.

      For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article.

  12. Click Install.

    The Installing Operators pane appears. When the installation finishes, a checkmark appears next to the Operator name.

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.

3.5. Installing and managing Red Hat OpenShift AI components

You can use the OpenShift command-line interface (CLI) or OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster.

To install Red Hat OpenShift AI components by using the OpenShift CLI (oc), you must create and configure a DataScienceCluster object.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc).
  • If you are using custom namespaces, you have created the namespaces.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in your terminal.

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml.

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          argoWorkflowsControllers:
            managementState: Removed 1
          managementState: Removed
        feastoperator:
          managementState: Removed
        kserve:
          managementState: Removed 2 3
        kueue:
          defaultClusterQueueName: default
          defaultLocalQueueName: default
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        modelregistry:
          managementState: Removed
          registriesNamespace: rhoai-model-registries
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: rhods-notebooks 4

    1 To use your own Argo Workflows instance with the datasciencepipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
    2 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform.
    3 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
    4 To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
  4. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  5. Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components.

    $ oc create -f rhods-operator-dsc.yaml

    You see output similar to the following:

    datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
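The Managed and Removed values described in step 4 can be mixed per component. For example, a minimal spec.components fragment that enables only the dashboard and workbenches (using the predefined workbench namespace) while leaving KServe removed might look like the following sketch; adjust the component list to match your deployment:

```yaml
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
      workbenchNamespace: rhods-notebooks
    kserve:
      managementState: Removed
```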

Verification

  1. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  2. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab.
    4. For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

      Note

      When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

    5. Click the default-dsc link to display the data science cluster details.
    6. Select the YAML tab.
    7. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

  3. In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
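The per-component pod check in step 1 of the verification can be scripted. The following sketch scans captured pod listings for at least one Running pod per component; the sample output and component name prefixes are illustrative stand-ins for `oc get pods -n redhat-ods-applications` on a real cluster:

```shell
# Sample output standing in for: oc get pods -n redhat-ods-applications
# (illustrative pod names; the real listing depends on installed components)
sample_pods='NAME                                  READY   STATUS    RESTARTS
rhods-dashboard-5f6b9c-abcde          2/2     Running   0
odh-notebook-controller-7d9f-xyz12    1/1     Running   0'

# Require at least one Running pod per expected component prefix.
missing=0
for component in rhods-dashboard odh-notebook-controller; do
  if ! printf '%s\n' "$sample_pods" | grep "$component" | grep -q Running; then
    echo "no running pod found for $component"
    missing=$((missing + 1))
  fi
done
echo "components missing pods: $missing"
```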

To install Red Hat OpenShift AI components by using the OpenShift web console, you must create and configure a DataScienceCluster object.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
  • You have cluster administrator privileges for your OpenShift cluster.
  • If you are using custom namespaces, you have created the namespaces.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. Click Create DataScienceCluster.
  5. For Configure via, select YAML view.

    An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object, similar to the following example:

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          argoWorkflowsControllers:
            managementState: Removed 1
          managementState: Removed
        feastoperator:
          managementState: Removed
        kserve:
          managementState: Removed 2 3
        kueue:
          defaultClusterQueueName: default
          defaultLocalQueueName: default
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        modelregistry:
          managementState: Removed
          registriesNamespace: {mr-default-namespace}
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: {workbench-default-namespace} 4

    1 To use your own Argo Workflows instance with the datasciencepipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
    2 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform.
    3 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
    4 To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  7. Click Create.
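For example, to install only the dashboard and workbenches components and leave all others uninstalled, the relevant portion of the CR would look similar to the following (a minimal illustration; enable the components that your deployment needs):

    spec:
      components:
        dashboard:
          managementState: Managed
        workbenches:
          managementState: Managed
          workbenchNamespace: rhods-notebooks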

Verification

  1. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab.
    4. For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

      Note

      When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

    5. Click the default-dsc link to display the data science cluster details.
    6. Select the YAML tab.
    7. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component appears in the spec.components section of the CR in the component-name: {} format, the component is not installed.

  2. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  3. In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.

You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster.

Important

If you upgraded OpenShift AI, the upgrade process automatically used the values of the previous version’s DataScienceCluster object. New components are not automatically added to the DataScienceCluster object.

After upgrading OpenShift AI:

  • Inspect the default DataScienceCluster object to check and optionally update the managementState status of the existing components.
  • Add any new components to the DataScienceCluster object.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster.
  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. On the DataScienceClusters page, click the default-dsc object.
  5. Click the YAML tab.

    An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object, similar to the following example:

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          managementState: Removed
        kserve:
          managementState: Removed
        kueue:
          managementState: Removed
        llamastackoperator:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Removed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
          workbenchNamespace: rhods-notebooks
  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  7. Click Save.

    For any components that you updated, OpenShift AI initiates a rolling update of all affected pods so that they use the updated image.

  8. If you are upgrading from OpenShift AI 2.19 or earlier, upgrade the Authorino Operator to the stable update channel, version 1.2.1 or later.

    Important

    If you are upgrading the Authorino Operator to the stable update channel, version 1.2.1 or later, in a disconnected environment, use the upgrade procedure described in the release notes: RHOAIENG-24786 - Upgrading the Authorino Operator from Technical Preview to Stable fails in disconnected environments. Otherwise, the upgrade can fail.
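For instance, to enable the KServe component on an existing installation, you would change its stanza from Removed to Managed (note that KServe also requires the Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, as described in Installing the single-model serving platform):

    kserve:
      managementState: Managed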

Verification

  1. Confirm that there is at least one running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
    3. In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
  2. Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the status.installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component appears in the spec.components section of the CR in the component-name: {} format, the component is not installed.

  3. In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.

3.5.4. Viewing installed OpenShift AI components

In the Red Hat OpenShift AI dashboard, you can view a list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components.

Prerequisites

  • OpenShift AI is installed in your OpenShift cluster.

Procedure

  1. Log in to the OpenShift AI dashboard.
  2. In the top navigation bar, click the help icon and then select About.

Verification

The About page shows a list of the installed OpenShift AI components along with their corresponding upstream components and upstream component versions.
