Chapter 8. Installing Red Hat OpenShift AI components by using the web console
The following procedure shows how to use the OpenShift Dedicated web console to install specific components of Red Hat OpenShift AI on your cluster by creating and configuring a `DataScienceCluster` object.

This procedure applies to a new installation. However, if you upgraded from version 1 of OpenShift AI (previously OpenShift Data Science), the upgrade process automatically created a default `DataScienceCluster` object. If you upgraded from a previous minor version, the upgrade process used the settings from the previous version's `DataScienceCluster` object. To inspect the `DataScienceCluster` object and change the installation status of Red Hat OpenShift AI components, see Updating the installation status of Red Hat OpenShift AI components by using the web console.
Prerequisites
- To support the KServe component, you installed dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Serving large language models.
- Red Hat OpenShift AI is installed as an Add-on to your Red Hat OpenShift cluster.
- You have cluster administrator privileges for your OpenShift Dedicated cluster.
Procedure
- Log in to the OpenShift Dedicated web console as a cluster administrator.
- In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
- Create a `DataScienceCluster` object to install OpenShift AI components by performing the following actions:
  - Click the Data Science Cluster tab.
  - Click Create DataScienceCluster.
  - For Configure via, select YAML view.
    An embedded YAML editor opens, showing a default custom resource (CR) for the `DataScienceCluster` object.
  - In the `spec.components` section of the CR, for each OpenShift AI component shown, set the value of the `managementState` field to either `Managed` or `Removed`. These values are defined as follows:
    - `Managed`: The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    - `Removed`: The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  Important
  - To learn how to install the KServe component, which is used by the single model serving platform to serve large language models, see Serving large language models.
- The CodeFlare and KubeRay components are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
- Click Create.
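For illustration, a completed CR might resemble the following sketch. The exact `apiVersion` and the set of component names depend on your OpenShift AI version, so treat the fields below as assumptions and start from the default CR shown in the embedded YAML editor rather than copying this example verbatim:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    # Managed: the Operator installs the component and keeps it active.
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    # Removed: the Operator does not install the component,
    # and uninstalls it if it is already present.
    kserve:
      managementState: Removed
    codeflare:
      managementState: Removed
    ray:
      managementState: Removed
```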
Verification
- On the DataScienceClusters page, click the `default-dsc` object and then perform the following actions:
  - Select the YAML tab.
  - In the `installedComponents` section, confirm that the components you installed have a status value of `true`.
- In the OpenShift Dedicated web console, click Workloads → Pods and then perform the following actions:
  - In the Project list at the top of the page, select the `redhat-ods-applications` project.
  - In the project, confirm that there are running pods for each of the OpenShift AI components that you installed.
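In the YAML tab from the first verification step, the `installedComponents` map appears under the CR's `status` section. A sketch of what it might look like follows; the exact component keys vary by OpenShift AI version and by which components you enabled, so the names here are illustrative:

```yaml
status:
  installedComponents:
    # true: the component was installed by the Operator.
    dashboard: true
    workbenches: true
    # false: the component was set to Removed (or failed to install).
    kserve: false
```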