Chapter 2. Setting up a project and storage
2.2. Setting up your data science project
To implement a data science workflow, you must create a data science project (as described in the following procedure). Projects help your team organize and collaborate on resources within separate namespaces. From a project, you can create multiple workbenches, each with its own IDE environment (for example, JupyterLab), and each with its own connections and cluster storage. In addition, workbenches can share models and data with pipelines and model servers.
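For reference, a data science project corresponds to an OpenShift project (namespace) that carries the OpenShift AI dashboard label. The following is a minimal sketch of an equivalent manifest, assuming you have permission to create namespaces directly; the name, display name, and annotation shown here are illustrative placeholders:
apiVersion: v1
kind: Namespace
metadata:
  name: fraud-detection                          # placeholder project (namespace) name
  labels:
    opendatahub.io/dashboard: "true"             # marks the namespace as a data science project
  annotations:
    openshift.io/display-name: Fraud Detection   # display name shown in the dashboard
Creating the project from the dashboard, as in the following procedure, applies this label for you.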
Prerequisites
- You have logged in to Red Hat OpenShift AI.
Procedure
- On the navigation menu, select Data science projects. This page lists any existing projects that you have access to.
If you are using the Red Hat Developer Sandbox, it provides a default data science project (for example, myname-dev). Select it and skip to the Verification section.
If you are using your own OpenShift cluster, you can select an existing project (if any) or create a new one. Click Create project.
Note: You can start a Jupyter notebook by clicking the Start basic workbench button, selecting a notebook image, and clicking Start server. However, in that case, it is a one-off Jupyter notebook run in isolation.
- In the Create project modal, enter a display name and description.
- Click Create.
Verification
You can see your project’s initial state. Individual tabs show more information about the project components and project access permissions:
- Workbenches are instances of your development and experimentation environment. They typically contain integrated development environments (IDEs), such as JupyterLab, RStudio, and Visual Studio Code.
- Pipelines contain the data science pipelines which run within the project.
- Models enable you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project, and one model server can host multiple models.
- Cluster storage is a persistent volume that retains the files and data you’re working on within a workbench. A workbench has access to one or more cluster storage instances.
- Connections contain required configuration parameters for connecting to a data source, such as an S3 object bucket.
- Permissions define which users and groups can access the project.
Next step
- Storing data with connections
2.3. Storing data with connections
Add connections to workbenches to connect your project to data inputs and object storage buckets. A connection is a resource that has the configuration parameters needed to connect to a data source or data sink, such as an AWS S3 object storage bucket.
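Under the hood, a connection is stored as a Kubernetes Secret in your project. The following sketch shows roughly what such a Secret looks like for S3-compatible storage; the label, annotation, and key names reflect how OpenShift AI commonly stores connections and may vary by release, and all values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: aws-connection-my-storage            # placeholder name
  labels:
    opendatahub.io/dashboard: "true"
    opendatahub.io/managed: "true"
  annotations:
    opendatahub.io/connection-type: s3
    openshift.io/display-name: My Storage
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
  AWS_S3_ENDPOINT: <endpoint-url>
  AWS_DEFAULT_REGION: <region>
  AWS_S3_BUCKET: <bucket-name>
You do not need to create this Secret yourself; both the provided script and the dashboard form described later create it for you.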
For this tutorial, you run a provided script that creates the following local MinIO storage buckets for you:
- My Storage - Use this bucket for storing your models and data. You can reuse this bucket and its connection for your notebooks and model servers.
- Pipelines Artifacts - Use this bucket as storage for your pipeline artifacts. When you create a pipeline server, you need a pipeline artifacts bucket. For this tutorial, create this bucket to separate it from the first storage bucket for clarity.
Although you can use one storage bucket for both storing models and data and for storing pipeline artifacts, this tutorial follows best practice and uses separate storage buckets for each purpose.
The provided script also creates a connection to each storage bucket.
To run the script that installs local MinIO storage buckets and creates connections to them, follow the steps in Running a script to install local object storage buckets and create connections.
If you want to use your own S3-compatible object storage buckets (instead of using the provided script), follow the steps in Creating connections to your own S3-compatible object storage.
2.3.1. Running a script to install local object storage buckets and create connections
For convenience, run a script (provided in the following procedure) that automatically completes these tasks:
- Creates a MinIO instance in your project.
- Creates two storage buckets in that MinIO instance.
- Generates a random user ID and password for your MinIO instance.
- Creates two connections in your project, one for each bucket and both using the same credentials.
- Installs required network policies for service mesh functionality.
This script is based on the guide for deploying MinIO.
The MinIO-based object storage that the script creates is not meant for production use.
If you want to connect to your own storage, see Creating connections to your own S3-compatible object storage.
Prerequisites
You must know the OpenShift resource name for your data science project so that you run the provided script in the correct project. To get the project’s resource name:
In the OpenShift AI dashboard, select Data science projects and then click the ? icon next to the project name. A text box opens with information about the project, including its resource name.
The following procedure describes how to run the script from the OpenShift console. If you are knowledgeable in OpenShift and can access the cluster from the command line, instead of following the steps in this procedure, you can use the following command to run the script:
oc apply -n <your-project-name> -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3.yaml
Procedure
In the OpenShift AI dashboard, click the application launcher icon and then select the OpenShift Console option.
In the OpenShift console, click + in the top navigation bar.
Select your project from the list of projects.
Verify that you selected the correct project.
Copy the following code and paste it into the Import YAML editor.
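The code block is not reproduced here. As a rough guide only, the following sketch shows the kind of resources it typically defines, inferred from the resource names in the Verification section: a ServiceAccount, a RoleBinding that grants it edit rights, and a Job that applies the setup-s3-no-sa.yaml manifest from the fraud-detection repository. The container image and command are assumptions; use the code block from the tutorial source when you run this step.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-setup
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-setup-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: demo-setup
---
apiVersion: batch/v1
kind: Job
metadata:
  name: create-s3-storage
spec:
  template:
    spec:
      serviceAccountName: demo-setup
      restartPolicy: Never
      containers:
      - name: create-s3-storage
        # Assumed image; any image that provides the oc client works
        image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
        command:
        - /bin/bash
        - -c
        - |
          # Apply the MinIO setup manifest into the current project
          oc apply -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3-no-sa.yaml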
Note: This code gets and applies the setup-s3-no-sa.yaml file.
- Click Create.
Verification
In the OpenShift console, a "Resources successfully created" message is displayed, along with a list of the following resources:
- demo-setup
- demo-setup-edit
- create-s3-storage
In the OpenShift AI dashboard:
- Select Data science projects and then click the name of your project, Fraud detection.
Click Connections. There are two connections listed:
My Storage and Pipeline Artifacts.
If your cluster uses self-signed certificates, your OpenShift AI administrator might need to configure a certificate authority (CA) to securely connect to the S3 object storage, as described in Accessing S3-compatible object storage with self-signed certificates (Self-Managed) or Accessing S3-compatible object storage with self-signed certificates (Cloud Service).
Next step
If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines.
Otherwise, skip to Creating a workbench.
2.3.2. Creating connections to your own S3-compatible object storage
If you have existing S3-compatible storage buckets that you want to use for this tutorial, you must create a connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this tutorial, create another connection to a different storage bucket for saving pipeline artifacts.
If you do not have your own S3-compatible storage, or if you want to use a disposable local MinIO instance instead, skip this task and follow the steps in Running a script to install local object storage buckets and create connections. The provided script automatically creates a MinIO instance in your project, creates two storage buckets in that instance, creates two connections in your project (one for each bucket, both using the same credentials), and installs the required network policies for service mesh functionality.
Prerequisites
To create connections to your existing S3-compatible storage buckets, you need the following credential information for the storage buckets:
- Endpoint URL
- Access key
- Secret key
- Region
- Bucket name
If you do not have this information, contact your storage administrator.
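For illustration only, the information typically looks like the following example; these are hypothetical placeholder values, not credentials you can use:
endpoint_url: https://s3.us-east-1.amazonaws.com         # example endpoint URL
access_key: AKIAIOSFODNN7EXAMPLE                         # placeholder access key
secret_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY     # placeholder secret key
region: us-east-1
bucket_name: my-storage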
Procedure
Create a connection for saving your data and models:
- In the OpenShift AI dashboard, navigate to the page for your data science project.
Click the Connections tab, and then click Create connection.
- In the Add connection modal, for the Connection type select S3 compatible object storage - v1.
Complete the Add connection form and name your connection My Storage. This connection is for saving your personal work, including data and models.
- Click Create.
Create a connection for saving pipeline artifacts:
Note: If you do not intend to complete the pipelines section of the tutorial, you can skip this step.
- Click Add connection.
Complete the form and name your connection Pipeline Artifacts.
- Click Create.
Verification
In the Connections tab for the project, check to see that your connections are listed.
If your cluster uses self-signed certificates, your OpenShift AI administrator might need to provide a certificate authority (CA) to securely connect to the S3 object storage, as described in Accessing S3-compatible object storage with self-signed certificates (Self-Managed) or Accessing S3-compatible object storage with self-signed certificates (Cloud Service).
Next step
If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines.
Otherwise, skip to Creating a workbench.
2.4. Enabling data science pipelines
You must prepare your tutorial environment so that you can use data science pipelines.
If you do not intend to complete the pipelines section of this tutorial, you can skip this step and move on to the next section, Setting up Kueue resources.
Later in this tutorial, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that executes in OpenShift AI.
Prerequisites
- You have installed local object storage buckets and created connections, as described in Storing data with connections.
Procedure
- In the OpenShift AI dashboard, on the Fraud Detection page, click the Pipelines tab.
Click Configure pipeline server.
In the Configure pipeline server form, in the Access key field next to the key icon, click the dropdown menu and then click Pipeline Artifacts.
The Configure pipeline server form autofills with credentials for the connection.
- In the Advanced Settings section, leave the default values.
- Click Configure pipeline server.
Wait until the loading spinner disappears and Start by importing a pipeline is displayed.
Important: You must wait until the pipeline configuration is complete before you continue and create your workbench. If you create your workbench before the pipeline server is ready, your workbench cannot submit pipelines to it.
If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can delete the pipeline server and create it again.
You can also ask your OpenShift AI administrator to verify that they applied self-signed certificates on your cluster as described in Working with certificates (Self-Managed) or Working with certificates (Cloud Service).
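For reference, configuring a pipeline server creates a pipeline server custom resource in your project. The following sketch shows roughly what it can look like; the API version, resource name, and field names are assumptions based on recent OpenShift AI releases and might differ on your cluster, and the values are placeholders:
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa                                   # assumed default name
spec:
  dspVersion: v2
  objectStorage:
    externalStorage:
      host: <s3-endpoint-host>                 # endpoint from the Pipeline Artifacts connection
      scheme: https
      bucket: <pipeline-artifacts-bucket>
      s3CredentialsSecret:
        secretName: <connection-secret-name>
        accessKey: AWS_ACCESS_KEY_ID           # key names within the connection Secret
        secretKey: AWS_SECRET_ACCESS_KEY
You do not need to create this resource yourself; the Configure pipeline server form creates and manages it.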
Verification
- Navigate to the Pipelines tab for the project.
Next to Import pipeline, click the action menu (⋮) and then select View pipeline server configuration.
An information box opens and displays the object storage connection information for the pipeline server.
Next step
- Setting up Kueue resources
2.5. Setting up Kueue resources
You must prepare your tutorial environment so that you can use Kueue for distributed training with the Training Operator.
In the Distributing training jobs with the Training Operator section of this tutorial, you implement a distributed training job by using Kueue for managing job resources. With Kueue, you can manage cluster resource quotas and how different workloads consume them.
If you are using the Red Hat Developer Sandbox, or if you do not intend to use Kueue to schedule your training jobs in the Distributing training jobs with the Training Operator section of this tutorial, skip this procedure and continue to the next section, Creating a workbench and selecting a workbench image.
Procedure
In the OpenShift AI dashboard, click the application launcher icon and then select the OpenShift Console option.
In the OpenShift console, click + in the top navigation bar.
Select your project from the list of projects.
Verify that you selected the correct project.
Copy the following code and paste it into the Import YAML editor.
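The code block is not reproduced here. As a rough guide only, the resources listed in the Verification section correspond to a Kueue ResourceFlavor, ClusterQueue, and LocalQueue, sketched below; the quota values are placeholder assumptions, so use the code block from the tutorial source when you run this step.
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}                 # admit workloads from any namespace
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor
      resources:
      - name: cpu
        nominalQuota: 8                 # placeholder quota
      - name: memory
        nominalQuota: 32Gi              # placeholder quota
      - name: nvidia.com/gpu
        nominalQuota: 0                 # placeholder quota
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: local-queue                     # namespaced queue that workloads in your project submit to
spec:
  clusterQueue: cluster-queue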
- Click Create.
Verification
In the OpenShift console, a "Resources successfully created" message is displayed with a list of the following resources:
- default-flavor
- cluster-queue
- local-queue