Chapter 3. Deploying OpenShift AI in a disconnected environment
You cannot upgrade from OpenShift AI 2.25 or any earlier version to 3.0. OpenShift AI 3.0 introduces significant technology and component changes and is intended for new installations only. To use OpenShift AI 3.0, install the Red Hat OpenShift AI Operator on a cluster running OpenShift Container Platform 4.19 or later and select the fast-3.x channel.
Support for upgrades will be available in a later release, including upgrades from OpenShift AI 2.25 to a stable 3.x version.
For more information, see the Why upgrades to OpenShift AI 3.0 are not supported Knowledgebase article.
Read this section to understand how to deploy Red Hat OpenShift AI as a development and testing environment for data scientists in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall. In this case, clusters cannot access the remote registries where Red Hat provided OperatorHub sources reside. Instead, the Red Hat OpenShift AI Operator can be deployed to a disconnected environment using a private registry to mirror the images.
Installing OpenShift AI in a disconnected environment involves the following high-level tasks:
- Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed.
- Mirror images to a private registry. See Mirroring images to a private registry for a disconnected installation.
- Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator.
- Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components.
- Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.
- Configure user and administrator groups to provide user access to OpenShift AI. See Adding users to OpenShift AI user groups.
- Provide your users with the URL for the OpenShift cluster on which you deployed OpenShift AI. See Accessing the OpenShift AI dashboard.
3.1. Requirements for OpenShift AI Self-Managed
You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster in a disconnected environment.
3.1.1. Platform requirements
Subscriptions
- A subscription for Red Hat OpenShift AI Self-Managed is required.
Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one.
Cluster administrator access
- Cluster administrator access is required to install OpenShift AI.
- You can use an existing cluster or create a new one that meets the supported version requirements.
Supported OpenShift versions
The following OpenShift versions are supported for installing OpenShift AI:
- OpenShift Container Platform 4.19 to 4.20. See Installing a cluster in a disconnected environment.
- To deploy models by using Distributed Inference with llm-d, your cluster must be running version 4.20 or later.
- After installing the cluster, configure the Cluster Samples Operator as described in Configuring Samples Operator for a restricted cluster.
- OpenShift Kubernetes Engine (OKE). See About OpenShift Kubernetes Engine.
Note: While OpenShift Kubernetes Engine (OKE) typically restricts the installation of certain post-installation Operators, Red Hat provides a specific licensing exception for Red Hat OpenShift AI users. This exception exclusively applies to Operators used to support Red Hat OpenShift AI workloads. Installing or using these Operators for purposes unrelated to OpenShift AI is a violation of the OKE service agreement.
The following Operators are required dependencies for Red Hat OpenShift AI 2.x and 3.x. These Operators are not supported on OKE, but can be installed if given an exception.
Operators by Red Hat OpenShift AI version (unsupported on OKE, exception required):
- 2.x: Authorino Operator, Service Mesh Operator, Serverless Operator
- 3.x: Job-set-operator, openshift-custom-metrics-autoscaler-operator, cert-manager Operator, Leader Worker Set Operator, Red Hat Connectivity Link Operator, Kueue Operator (RHBOK), SR-IOV Operator, GPU Operator (with custom configurations), OpenTelemetry, Tempo, Cluster Observability Operator
In OpenStack, CodeReady Containers (CRC), and other private cloud environments without integrated external DNS, you must manually configure DNS A or CNAME records after installing the Operator and components, when the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.
Cluster configuration
- A minimum of 2 worker nodes with at least 8 CPUs and 32 GiB RAM each is required to install the Operator.
- For single-node OpenShift clusters, the node must have at least 32 CPUs and 128 GiB RAM.
- Additional resources are required depending on your workloads.
- Open Data Hub must not be installed on the cluster.
Storage requirements
Your cluster must have a default storage class that supports dynamic provisioning. To confirm that a default storage class is configured, run the following command:
oc get storageclass

If no storage class is marked as the default, see Changing the default storage class in the OpenShift Container Platform documentation.
Identity provider configuration
- An identity provider must be configured for your OpenShift cluster, which provides authentication for OpenShift AI. See Understanding identity provider configuration.
- You must access the cluster as a user with the cluster-admin role; the kubeadmin user is not allowed. For more information, see the relevant documentation:
- OpenShift Container Platform: Creating a cluster admin
- OpenShift Dedicated: Managing OpenShift Dedicated administrators
- ROSA: Creating a cluster administrator user for quick cluster access
Internet access on the mirroring machine
Along with internet access, the following domains must be accessible to mirror images required for the OpenShift AI installation:
- cdn.redhat.com
- subscription.rhn.redhat.com
- registry.access.redhat.com
- registry.redhat.io
- quay.io

For environments that build or customize CUDA-based images using NVIDIA’s base images, or that directly pull artifacts from the NVIDIA NGC catalog, the following domains must also be accessible:

- ngc.download.nvidia.cn
- developer.download.nvidia.com
Access to these NVIDIA domains is not required for standard OpenShift AI installations. The CUDA-based container images used by OpenShift AI are prebuilt and hosted on Red Hat’s registry at registry.redhat.io.
Image mirroring
- For disconnected environments, you must mirror all required images to your private registry before installing OpenShift AI. See the RHOAI disconnected installation guide for details.
Object storage
- Several components of OpenShift AI require or can use S3-compatible object storage, such as AWS S3, MinIO, Ceph, or IBM Cloud Storage. Object storage provides HTTP-based access to data by using the S3 API, which is the standard interface for most object storage services.
- Object storage must be reachable from the OpenShift cluster and deployed within the same disconnected network.
Object storage is required for:
- Single-model serving platform, for storing and deploying models.
- AI pipelines, for storing artifacts, logs, and intermediate results.
Object storage can also be used by:
- Workbenches, for accessing large datasets.
- Kueue-based workloads, for reading input data and writing output results.
- Code executed inside pipelines, for persisting generated models or other artifacts.
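As an illustration only, an S3-compatible object storage connection is commonly provided to workloads as an OpenShift secret. The annotation and key names below follow the data connection convention used by the OpenShift AI dashboard; treat the names, namespace, and endpoint as assumptions and verify them against your OpenShift AI version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-storage-connection            # hypothetical name
  namespace: my-data-science-project     # hypothetical project namespace
  labels:
    opendatahub.io/dashboard: "true"     # assumed dashboard convention
  annotations:
    opendatahub.io/connection-type: s3   # assumed connection-type annotation
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access_key>
  AWS_SECRET_ACCESS_KEY: <secret_key>
  AWS_S3_ENDPOINT: https://s3.internal.example.com   # must be reachable inside the disconnected network
  AWS_DEFAULT_REGION: us-east-1
  AWS_S3_BUCKET: <bucket_name>
```

In a disconnected environment, the endpoint must resolve and be routable from inside the cluster, as noted above.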
Custom namespaces
- By default, OpenShift AI uses predefined namespaces, but you can define custom namespaces for the Operator, applications, and workbenches if needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly.
- If you use custom namespaces, create and label them before installing the OpenShift AI Operator. See Configuring custom namespaces.
3.1.2. Component requirements
Meet the requirements for the components and capabilities that you plan to use.
Workbenches (workbenches)
- To use a custom workbench namespace, create the namespace before installing the OpenShift AI Operator. See Configuring custom namespaces.
AI Pipelines (aipipelines)
- To store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage, configure write access to your S3 bucket on your storage account.
- If your cluster is running in FIPS mode, any custom container images for data science pipelines must be based on UBI 9 or RHEL 9. This ensures compatibility with FIPS-approved pipeline components and prevents errors related to mismatched OpenSSL or GNU C Library (glibc) versions.
- To use your own Argo Workflows instance, see Configuring pipelines with your own Argo Workflows instance after you install the OpenShift AI Operator.
Kueue-based workloads (kueue, ray, trainingoperator)
- Install the Red Hat build of Kueue Operator.
- Install the cert-manager Operator.
- See Configuring workload management with Kueue and Installing the distributed workloads components.
Model serving platform (kserve)
- Install the cert-manager Operator.
Distributed Inference with llm-d (advanced kserve)
- Install the cert-manager Operator.
- Install the Red Hat Connectivity Link Operator.
- Install the Red Hat Leader Worker Set Operator.
- See Deploying models by using Distributed Inference with llm-d.
Llama Stack and RAG workloads (llamastackoperator)
- Install the Llama Stack Operator.
- Install the Red Hat OpenShift Service Mesh Operator 3.x.
- Install the cert-manager Operator.
- Ensure you have GPU-enabled nodes available on your cluster.
- Install the Node Feature Discovery Operator.
- Install the NVIDIA GPU Operator.
- Configure access to S3-compatible object storage for your model artifacts.
- See Working with Llama Stack.
Model registry (modelregistry)
- Configure access to an external MySQL database 5.x or later; 8.x is recommended.
- Configure access to S3-compatible object storage.
- See Creating a model registry.
3.2. Mirroring images to a private registry for a disconnected installation
You can install the Red Hat OpenShift AI Operator to your OpenShift cluster in a disconnected environment by mirroring the required container images to a private container registry. After mirroring the images to a container registry, you can install the Red Hat OpenShift AI Operator by using OperatorHub.
You can use the mirror registry for Red Hat OpenShift, a small-scale container registry, as a target for mirroring the required container images for OpenShift AI in a disconnected environment. Using the mirror registry for Red Hat OpenShift is optional if another container registry is already available in your installation environment.
Prerequisites
- You have cluster administrator access to a running OpenShift Container Platform cluster, version 4.19 or greater.
- You have credentials for Red Hat OpenShift Cluster Manager (https://console.redhat.com/openshift/).
- Your mirroring machine is running Linux, has 100 GB of space available, and has access to the Internet so that it can obtain the images to populate the mirror repository.
- You have installed the OpenShift CLI (oc).
- You have reviewed the component requirements and identified all operators you must mirror in addition to the Red Hat OpenShift AI Operator. See Requirements for OpenShift AI Self-Managed.
This procedure uses the oc-mirror plugin v2; the oc-mirror plugin v1 is now deprecated. For more information, see Changes from oc-mirror plugin v1 to v2 in the OpenShift documentation.
Procedure
- Create a mirror registry. See Creating a mirror registry with mirror registry for Red Hat OpenShift in the OpenShift Container Platform documentation.
- To mirror registry images, install the oc-mirror OpenShift CLI plugin v2 on your mirroring machine running Linux. See Installing the oc-mirror OpenShift CLI plugin in the OpenShift Container Platform documentation.
  Important: The oc-mirror plugin v1 is deprecated. Red Hat recommends that you use the oc-mirror plugin v2 for continued support and improvements.
- Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. See Configuring credentials that allow images to be mirrored in the OpenShift Container Platform documentation.
- Open the example image set configuration file (rhoai-<version>.md) from the disconnected installer helper repository and examine its contents. The disconnected installer helper file includes a list of Additional images required to install OpenShift AI in a disconnected environment, as well as a list of older Unsupported images provided for reference only. These older images are no longer maintained by Red Hat but are included for convenience, such as when importing older resources or maintaining compatibility with previous environments.
- Using the example image set configuration file, create a file called imageset-config.yaml and populate it with values suitable for the image set configuration in your deployment.
  To view a list of the available OpenShift versions, run the following command. This might take several minutes. If the command returns errors, repeat the steps in Configuring credentials that allow images to be mirrored.

$ oc-mirror list operators

  To see the available channels for a package in a specific version of OpenShift Container Platform (for example, 4.19), run the following command:

$ oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.19 --package=<package_name>

  For information about subscription update channels, see Understanding update channels.
  Important: The example image set configurations are for demonstration purposes only and might need further alterations depending on your deployment.
  To identify the attributes most suitable for your deployment, see Image set configuration parameters and Image set configuration examples in the OpenShift Container Platform documentation.

Example imageset-config.yaml
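The example file in the disconnected installer helper repository is authoritative. As an illustration only, a minimal oc-mirror v2 image set configuration for mirroring the Operator might look like the following; the package name, channel, and additional image are assumptions to verify against the helper file and the oc-mirror list operators output:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.19
      packages:
        - name: rhods-operator        # package name for the Red Hat OpenShift AI Operator (assumption)
          channels:
            - name: fast-3.x          # update channel described in this chapter
  additionalImages:
    - name: registry.redhat.io/ubi9/ubi:latest   # placeholder; use the Additional images list from the helper file
```

A real configuration also mirrors every dependent Operator identified in the component requirements, each as its own entry under packages.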
- Download the specified image set configuration to a local file on your mirroring machine:
  - Replace <mirror_rhoai> with the target directory where you want to output the image set file.
  - The target directory path must start with file://. The download might take several minutes.

$ oc mirror -c imageset-config.yaml file://<mirror_rhoai> --v2

  Tip: If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, set skipTLS to true in your image set configuration file and run the command again.
- Verify that the image set .tar files were created:

$ ls <mirror_rhoai>

  Example output:

mirror_000001.tar, mirror_000002.tar

  If an archiveSize value was specified in the image set configuration file, the image set might be separated into multiple .tar files.
- Optional: Verify that the total size of the image set .tar files is around 75 GB:

$ du -h --max-depth=1 ./<mirror_rhoai>/

  If the total size of the image set is significantly less than 75 GB, run the oc mirror command again.
- Upload the contents of the generated image set to your target mirror registry:
  - Replace <mirror_rhoai> with the directory that contains your image set .tar files.
  - Replace <registry.example.com:5000> with your mirror registry.

$ oc mirror -c imageset-config.yaml --from file://<mirror_rhoai> docker://<registry.example.com:5000> --v2

  Tip: If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, run the following command:

$ oc mirror --dest-tls-verify false --from=./<mirror_rhoai> docker://<registry.example.com:5000> --v2
- Log in to your target OpenShift cluster using the OpenShift CLI as a user with the cluster-admin role.
- Verify that the YAML files are present for the ImageDigestMirrorSet and CatalogSource resources. Replace <mirror_rhoai> with the directory that contains your image set .tar files.

$ ls <mirror_rhoai>/working-dir/cluster-resources/

  Example output:

cs-redhat-operator-index.yaml idms-oc-mirror.yaml

- Install the generated resources into the cluster. Replace <oc_mirror_workspace_path> with the path to your oc mirror workspace.

$ oc apply -f <oc_mirror_workspace_path>/working-dir/cluster-resources
Verification
Verify that the CatalogSource and pod were created successfully:

$ oc get catalogsource,pod -n openshift-marketplace

This should return at least one catalog and two pods.
Check that the Red Hat OpenShift AI Operator exists in the OperatorHub:
- Log in to the OpenShift web console.
- Click Operators → OperatorHub. The OperatorHub page opens.
- Confirm that the Red Hat OpenShift AI Operator is shown.
- If you mirrored additional operators, check that those operators exist in the OperatorHub.
3.3. Configuring custom namespaces
By default, OpenShift AI uses the following predefined namespaces:
- redhat-ods-operator contains the Red Hat OpenShift AI Operator
- redhat-ods-applications includes the dashboard and other required components of OpenShift AI
- rhods-notebooks is where basic workbenches are deployed by default
If needed, you can define custom namespaces to use instead of the predefined ones before installing OpenShift AI. This flexibility supports environments with naming policies or conventions and allows cluster administrators to control where components such as workbenches are deployed.
Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly.
Prerequisites
- You have access to an OpenShift AI cluster with cluster administrator privileges.
- You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster:
- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- You have not yet installed the Red Hat OpenShift AI Operator.
Procedure
In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI (oc) as shown in the following example:

$ oc login <openshift_cluster_url> -u <admin_username> -p <password>

Optional: To configure a custom operator namespace:
Create a namespace YAML file named operator-namespace.yaml, where the name field defines the operator namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: <operator-namespace>

Create the namespace in your OpenShift cluster:

$ oc create -f operator-namespace.yaml

You see output similar to the following:

namespace/<operator-namespace> created

When you install the Red Hat OpenShift AI Operator, use this namespace instead of redhat-ods-operator.
Optional: To configure a custom applications namespace:
Create a namespace YAML file named applications-namespace.yaml, where the name field defines the applications namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: <applications-namespace>

Create the namespace in your OpenShift cluster:

$ oc create -f applications-namespace.yaml

You see output similar to the following:

namespace/<applications-namespace> created
Optional: To configure a custom workbench namespace:
Create a namespace YAML file named workbench-namespace.yaml, where the name field defines the workbench namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: <workbench-namespace>

Create the namespace in your OpenShift cluster:

$ oc create -f workbench-namespace.yaml

You see output similar to the following:

namespace/<workbench-namespace> created

When you install the Red Hat OpenShift AI components, specify this namespace for the spec.workbenches.workbenchNamespace field. You cannot change the default workbench namespace after you have installed the Red Hat OpenShift AI Operator.
3.4. Installing the Red Hat OpenShift AI Operator
This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster by using either the command-line interface (CLI) or the OpenShift web console.
If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information.
3.4.1. Installing the Red Hat OpenShift AI Operator by using the CLI
The following procedure shows how to use the OpenShift CLI (oc) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster.
Prerequisites
- You have a running OpenShift cluster, version 4.19 or greater, configured with a default storage class that can be dynamically provisioned.
- You have cluster administrator privileges for your OpenShift cluster.
- You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster:
- Installing the OpenShift CLI for OpenShift Container Platform
- Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- If you are using custom namespaces, you have created and labeled them as required.
  Note: The example commands in this procedure use the predefined operator namespace. If you are using a custom operator namespace, replace redhat-ods-operator with your namespace.
- You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.
Procedure
- Open a new terminal window.
Follow these steps to log in to your OpenShift cluster as a cluster administrator:
- In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
- After you have logged in, click Display token.
Copy the Log in with this token command and paste it in your terminal.
$ oc login --token=<token> --server=<openshift_cluster_url>
Create a namespace for installation of the Operator by performing the following actions:
Note: If you have already created a custom namespace for the Operator, you can skip this step.
Create a namespace YAML file named rhods-operator-namespace.yaml, where the name field defines the operator namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: redhat-ods-operator

Create the namespace in your OpenShift cluster:

$ oc create -f rhods-operator-namespace.yaml

You see output similar to the following:

namespace/redhat-ods-operator created
Create an operator group for installation of the Operator by performing the following actions:
Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml, where the namespace field defines the operator namespace:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator

Create the OperatorGroup object in your OpenShift cluster:

$ oc create -f rhods-operator-group.yaml

You see output similar to the following:

operatorgroup.operators.coreos.com/rhods-operator created
Create a subscription for installation of the Operator by performing the following actions:
Create a Subscription object CR file, for example, rhods-operator-subscription.yaml. In the file:
- Set the namespace to the operator namespace.
- Set the update channel. You must specify a value of fast, fast-x.y, stable, stable-x.y, eus-x.y, or alpha. For more information, see Understanding update channels.
- Optional: Set the operator version. If you do not specify a value, the subscription defaults to the latest operator version. For more information, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.

Create the Subscription object in your OpenShift cluster to install the Operator:

$ oc create -f rhods-operator-subscription.yaml

You see output similar to the following:

subscription.operators.coreos.com/rhods-operator created
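As a sketch only, a Subscription CR consistent with the step above might look like the following. The package name, CatalogSource name (created by oc mirror), and pinned version are assumptions; confirm the source with oc get catalogsource -n openshift-marketplace in your environment:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator        # operator namespace
spec:
  name: rhods-operator                  # Operator package name (assumption)
  channel: fast-3.x                     # update channel
  source: cs-redhat-operator-index      # CatalogSource created by oc mirror (assumption; verify the name)
  sourceNamespace: openshift-marketplace
  # startingCSV: <operator_csv_version> # optional: pin the operator version
```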
Verification
In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:
- Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
- Succeeded - installation is successful.
3.4.2. Installing the Red Hat OpenShift AI Operator by using the web console
The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster.
Prerequisites
- You have a running OpenShift cluster, version 4.19 or greater, configured with a default storage class that can be dynamically provisioned.
- You have cluster administrator privileges for your OpenShift cluster.
- If you are using custom namespaces, you have created and labeled them as required.
- You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.
Procedure
- Log in to the OpenShift web console as a cluster administrator.
- In the web console, click Operators → OperatorHub.
- On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box.
- Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens.
- Select a Channel. For information about subscription update channels, see Understanding update channels.
- Select a Version.
- Click Install. The Install Operator page opens.
- Review or change the selected channel and version as needed.
- For Installation mode, note that the only available value is All namespaces on the cluster (default). This installation mode makes the Operator available to all namespaces in the cluster.
For Installed Namespace, choose one of the following options:
- To use the predefined operator namespace, select the Operator recommended Namespace: redhat-ods-operator option.
- To use the custom operator namespace that you created, select the Select a Namespace option, and then select the namespace from the drop-down list.
For Update approval, select one of the following update strategies:
- Automatic: Your environment attempts to install new updates when they are available based on the content of your mirror.
Manual: A cluster administrator must approve any new updates before installation begins.
Important: By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version.
If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article.
Click Install.
The Installing Operators pane appears. When the installation finishes, a checkmark appears next to the Operator name.
Verification
In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:
- Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
- Succeeded - installation is successful.
3.5. Installing and managing Red Hat OpenShift AI components
You can use the OpenShift command-line interface (CLI) or OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster.
3.5.1. Installing Red Hat OpenShift AI components by using the CLI
To install Red Hat OpenShift AI components by using the OpenShift CLI (oc), you must create and configure a DataScienceCluster object.
The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.
For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
- You have cluster administrator privileges for your OpenShift cluster.
- You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster:
  - Installing the OpenShift CLI for OpenShift Container Platform
  - Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
- If you are using custom namespaces, you have created the namespaces.
Procedure
- Open a new terminal window.
Follow these steps to log in to your OpenShift cluster as a cluster administrator:
- In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
- After you have logged in, click Display token.
- Copy the Log in with this token command and paste it in your terminal.

  $ oc login --token=<token> --server=<openshift_cluster_url>
- Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml.

  1. To use your own Argo Workflows instance with the aipipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
  2. To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
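A minimal DataScienceCluster CR might look like the following sketch. Treat it as illustrative only: the exact component list and field names depend on your OpenShift AI version, and the placement of the workbench namespace field and the argoWorkflowsControllers block is an assumption based on the notes above.

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
      workbenchNamespace: rhods-notebooks   # note 2: or your custom namespace; may be omitted
    kserve:
      managementState: Managed
    aipipelines:
      managementState: Managed
      argoWorkflowsControllers:
        managementState: Managed            # note 1: set to Removed to use your own Argo Workflows
```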
- In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:
  - Managed: The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
  - Removed: The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
Important
- To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the model serving platform.
- To learn how to install the distributed workloads components, see Installing the distributed workloads components.
- To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment.
- Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components.

  $ oc create -f rhods-operator-dsc.yaml

  You see output similar to the following:

  datasciencecluster.datasciencecluster.opendatahub.io/default created
Verification
Confirm that there is at least one running pod for each component:
- In the OpenShift web console, click Workloads → Pods.
- In the Project list at the top of the page, select redhat-ods-applications.
- In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
Confirm the status of all installed components:
- In the OpenShift web console, click Operators → Installed Operators.
- Click the Red Hat OpenShift AI Operator.
- Click the Data Science Cluster tab.
- For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

  Note: When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

- Click the default-dsc link to display the data science cluster details.
- Select the YAML tab.
- In the status.installedComponents section, confirm that the components you installed have a status value of true.

  Note: If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.
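When the installation is complete, the YAML tab shows a status section similar to the following sketch. The component names listed depend on which components you enabled; the names here are illustrative.

```yaml
status:
  phase: Ready
  installedComponents:
    dashboard: true
    workbenches: true
    kserve: true
```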
- In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
Next steps
- If you are using OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, manually configure DNS A or CNAME records after the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.
- Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.
3.5.2. Installing Red Hat OpenShift AI components by using the web console
To install Red Hat OpenShift AI components by using the OpenShift web console, you must create and configure a DataScienceCluster object.
The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation.
For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
- You have cluster administrator privileges for your OpenShift cluster.
- If you are using custom namespaces, you have created the namespaces.
Procedure
- Log in to the OpenShift web console as a cluster administrator.
- In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
- Click the Data Science Cluster tab.
- Click Create DataScienceCluster.
- For Configure via, select YAML view.

  An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object.

  1. To use your own Argo Workflows instance with the aipipelines component, set argoWorkflowsControllers.managementState to Removed. This allows you to integrate with a managed Argo Workflows installation already on your OpenShift cluster and avoid conflicts with the embedded controller. See Configuring pipelines with your own Argo Workflows instance.
  2. To use the predefined workbench namespace, set this value to rhods-notebooks or omit this line. To use a custom workbench namespace, set this value to your namespace.
- In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:
  - Managed: The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
  - Removed: The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
Important
- To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the model serving platform.
- To learn how to install the distributed workloads components, see Installing the distributed workloads components.
- To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment.
- Click Create.
Verification
Confirm the status of all installed components:
- In the OpenShift web console, click Operators → Installed Operators.
- Click the Red Hat OpenShift AI Operator.
- Click the Data Science Cluster tab.
- For the DataScienceCluster object called default-dsc, verify that the status is Phase: Ready.

  Note: When you edit the spec.components section to change the installation status of a component, the default-dsc status also changes. During the initial installation, it might take a few minutes for the status phase to change from Progressing to Ready. You can access the OpenShift AI dashboard before the default-dsc status phase is Ready, but all components might not be ready.

- Click the default-dsc link to display the data science cluster details.
- Select the YAML tab.
- In the status.installedComponents section, confirm that the components you installed have a status value of true.

  Note: If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.
Confirm that there is at least one running pod for each component:
- In the OpenShift web console, click Workloads → Pods.
- In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
- In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
- In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
Next steps
- If you are using OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, manually configure DNS A or CNAME records after the LoadBalancer IP becomes available. For more information, see Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds.
- Complete any additional configuration required for the components you enabled. See the component-specific configuration sections for details.
3.5.3. Updating the installation status of Red Hat OpenShift AI components by using the web console
You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift cluster.
- You have cluster administrator privileges for your OpenShift cluster.
Procedure
- Log in to the OpenShift web console as a cluster administrator.
- In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
- Click the Data Science Cluster tab.
- On the DataScienceClusters page, click the default-dsc object.
- Click the YAML tab.

  An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object.

- In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:
  - Managed: The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
  - Removed: The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
Important
- To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the model serving platform.
- To learn how to install the distributed workloads feature, see Installing the distributed workloads components.
- To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment.
- Click Save.

  For any components that you updated, OpenShift AI initiates a rollout that affects all pods to use the updated image.

- If you are upgrading from OpenShift AI 2.19 or earlier, upgrade the Authorino Operator to the stable update channel, version 1.2.1 or later.

  Important: If you are upgrading the Authorino Operator to the stable update channel, version 1.2.1 or later in a disconnected environment, use the upgrade procedure described in the release notes: RHOAIENG-24786 - Upgrading the Authorino Operator from Technical Preview to Stable fails in disconnected environments. Otherwise, the upgrade can fail.
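For example, an edit to spec.components that removes a previously installed component might look like the following sketch. The component names are illustrative; use the names that appear in your own CR.

```yaml
spec:
  components:
    dashboard:
      managementState: Managed    # keep installed and actively managed
    codeflare:
      managementState: Removed    # if installed, the Operator tries to remove it
```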
Verification
Confirm that there is at least one running pod for each component:
- In the OpenShift web console, click Workloads → Pods.
- In the Project list at the top of the page, select redhat-ods-applications or your custom applications namespace.
- In the applications namespace, confirm that there are one or more running pods for each of the OpenShift AI components that you installed.
Confirm the status of all installed components:
- In the OpenShift web console, click Operators → Installed Operators.
- Click the Red Hat OpenShift AI Operator.
- Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
- Select the YAML tab.
- In the status.installedComponents section, confirm that the components you installed have a status value of true.

  Note: If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.
- In the OpenShift AI dashboard, users can view the list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed OpenShift AI components.
3.5.4. Viewing installed OpenShift AI components
In the Red Hat OpenShift AI dashboard, you can view a list of the installed OpenShift AI components, their corresponding source (upstream) components, and the versions of the installed components.
Prerequisites
- OpenShift AI is installed in your OpenShift cluster.
Procedure
- Log in to the OpenShift AI dashboard.
- In the top navigation bar, click the help icon and then select About.
Verification
The About page shows a list of the installed OpenShift AI components along with their corresponding upstream components and upstream component versions.
Additional resources