Chapter 4. Configuring cluster storage


4.1. About persistent storage

OpenShift AI uses persistent storage to support workbenches, project data, and model training.

Persistent storage is provisioned through OpenShift storage classes and persistent volumes. The storage class determines how a volume is provisioned, and the access modes determine how the volume can be mounted and shared.

Understanding storage classes and access modes can help you choose the right storage for your use case and avoid potential risks when sharing data across multiple workbenches.

4.1.1. Storage classes in OpenShift AI

Storage classes in OpenShift AI are available from the underlying OpenShift cluster. A storage class defines how persistent volumes are provisioned, including which storage backend is used and what access modes the provisioned volumes can support. For more information, see Dynamic provisioning in the OpenShift documentation.

Cluster administrators create and configure storage classes in the OpenShift cluster. These storage classes provision persistent volumes that support one or more access modes, depending on the capabilities of the storage backend. OpenShift AI administrators then enable specific storage classes and access modes for use in OpenShift AI.

When adding cluster storage to your project or workbench, you can choose from any enabled storage classes and access modes.

4.1.2. Access modes

Storage classes create persistent volumes that can support different access modes, depending on the storage backend. Access modes control how a volume can be mounted and used by one or more workbenches. If a storage class allows more than one access mode, you can select the one that best fits your needs when you request storage. All persistent volumes support ReadWriteOnce (RWO) by default.


ReadWriteOnce (RWO) (Default)

The storage can be attached to a single workbench or pod at a time and is ideal for most individual workloads. RWO is always enabled by default and cannot be disabled by the administrator.

ReadWriteMany (RWX)

The storage can be attached to many workbenches simultaneously. RWX enables shared data access, but can introduce data risks.

ReadOnlyMany (ROX)

The storage can be attached to many workbenches as read-only. ROX is useful for sharing reference data without allowing changes.

ReadWriteOncePod (RWOP)

The storage can be attached to a single pod on a single node with read-write permissions. RWOP is similar to RWO but includes additional node-level restrictions.

Note

Enable only the access modes that your workloads require. If you select an access mode whose support by the storage backend is unknown, a warning is displayed, but you can still click Save to create the storage class with the selected access mode.
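
The access modes above correspond directly to the `accessModes` field of a Kubernetes PersistentVolumeClaim, which is what OpenShift AI creates behind the scenes when you add cluster storage. As a sketch (the claim name `team-data` and the storage class name `shared-storage` are placeholders, not product defaults), a claim requesting RWX could look like this:

```shell
# Write a sample PersistentVolumeClaim manifest. The claim name and
# storage class ("team-data", "shared-storage") are illustrative placeholders.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: team-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: many workbenches, read-write
  resources:
    requests:
      storage: 10Gi        # 10 gibibytes
  storageClassName: shared-storage
EOF

grep -A 1 'accessModes' "$f"
```

A cluster administrator could apply such a manifest with `oc apply -f`, but in OpenShift AI the dashboard creates the PVC for you when you add cluster storage.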

4.1.2.1. Using shared storage (RWX)

The ReadWriteMany (RWX) access mode allows multiple workbenches to access and write to the same storage volume at the same time. Use RWX access mode for collaborative work where multiple users need to access shared datasets or project files.

However, shared storage introduces several risks:

  • Data corruption or data loss: If multiple workbenches modify the same part of a file simultaneously, the data can become corrupted or lost. Ensure your applications or workflows are designed to safely handle shared access, for example, by using file locking or database transactions.
  • Security and privacy: If a workbench with access to shared storage is compromised, all data on that volume might be at risk. Only share sensitive data with trusted workbenches and users.

To use shared storage safely:

  • Ensure that your tools or workflows are designed to work with shared storage and can manage simultaneous writes. For example, use databases or distributed data processing frameworks.
  • Be cautious with changes. Deleting or editing files affects everyone who shares the volume.
  • Back up your data regularly, which can help prevent data loss due to mistakes or misconfigurations.
  • Limit access to RWX volumes to trusted users and secure workbenches.
  • Use ReadWriteMany (RWX) only when collaboration on a shared volume is required. For most individual tasks, ReadWriteOnce (RWO) is ideal because only one workbench can write to the volume at a time.
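
When concurrent writers on an RWX volume cannot be avoided, advisory file locking is one way to serialize access, as suggested above. A minimal sketch using flock(1) (the lock file and log paths are arbitrary examples, not product paths):

```shell
#!/bin/sh
# Two writers append to a shared log; flock serializes the writes so
# lines are never interleaved. Paths are illustrative, not product defaults.
log=$(mktemp)
lock="${log}.lock"

append_safely() {
  # Hold an exclusive lock on the lock file for the duration of the write.
  flock "$lock" sh -c "echo \"$1\" >> \"$log\""
}

append_safely "writer-1: step done" &
append_safely "writer-2: step done" &
wait
cat "$log"
```

The same pattern works inside a workbench when the lock file lives on the shared volume, so that all workbenches contend for the same lock.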

4.2. Adding cluster storage to your project

For projects that require data to be retained, you can add cluster storage to the project. You can also connect the cluster storage to a specific workbench in the project.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project that you can add cluster storage to.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project that you want to add the cluster storage to.

    A project details page opens.

  3. Click the Cluster storage tab.
  4. Click Add cluster storage.

    The Add cluster storage dialog opens.

  5. In the Name field, enter a unique name for the cluster storage.
  6. Optional: In the Description field, enter a description for the cluster storage.
  7. From the Storage class list, select the type of cluster storage.

    Note

    You cannot change the storage class after you add the cluster storage to the project.

  8. For storage classes that support multiple access modes, select an Access mode to define how the volume can be accessed. For more information, see About persistent storage.

    Only the access modes that have been enabled for the storage class by your cluster and OpenShift AI administrators are visible.

  9. In the Persistent storage size section, specify a size in gibibytes or mebibytes.
  10. Optional: If you want to connect the cluster storage to an existing workbench:

    1. In the Workbench connections section, click Add workbench.
    2. In the Name field, select an existing workbench from the list.
    3. In the Path format field, select Standard if your storage directory begins with /opt/app-root/src, otherwise select Custom.
    4. In the Mount path field, enter the path to a model or directory within a container where a volume is mounted and accessible. The path must consist of lowercase alphanumeric characters or -. Use / to indicate subdirectories.
  11. Click Add storage.
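
The mount path rule in step 10 (lowercase alphanumeric characters or -, with / indicating subdirectories) can be expressed as a pattern. A sketch, where the regular expression is our interpretation of the stated rule rather than the dashboard's exact validation:

```shell
#!/bin/sh
# Check a candidate mount path against the documented character rule:
# segments of lowercase letters, digits, or '-', separated by '/'.
# The regex is an interpretation of the rule, not the product's own check.
is_valid_mount_path() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]+(/[a-z0-9-]+)*$'
}

is_valid_mount_path "data/models" && echo "data/models: valid"
is_valid_mount_path "Data/Models" || echo "Data/Models: invalid"
```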

Verification

  • The cluster storage that you added is displayed on the Cluster storage tab for the project.
  • A new persistent volume claim (PVC) is created with the storage size that you defined.
  • The persistent volume claim (PVC) is visible as an attached storage on the Workbenches tab for the project.

4.3. Updating cluster storage

If your data science work requires it, you can update a project’s cluster storage to change its identifying information or the workbench that the storage is connected to.

Note

You cannot directly change the storage class for cluster storage that is already configured for a workbench or project. To switch to a different storage class, you need to migrate your data to a new cluster storage instance that uses the required storage class. For more information, see Changing the storage class for an existing cluster storage instance.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project that contains cluster storage.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project whose storage you want to update.

    A project details page opens.

  3. Click the Cluster storage tab.
  4. Click the action menu (⋮) beside the storage that you want to update and then click Edit storage.

    The Update cluster storage page opens.

  5. Optional: Edit the Name field to change the display name for your storage.
  6. Optional: Edit the Description field to change the description of your storage.
  7. Optional: In the Persistent storage size section, specify a new size in gibibytes or mebibytes.

    Note that you can only increase the storage size. Updating the storage size restarts the workbench and makes it unavailable for a period of time that is usually proportional to the size change.

  8. Optional: If you want to connect the cluster storage to a different workbench:

    1. In the Workbench connections section, click Add workbench.
    2. In the Name field, select an existing workbench from the list.
    3. In the Path format field, select Standard if your storage directory begins with /opt/app-root/src, otherwise select Custom.
    4. In the Mount path field, enter the path to a model or directory within a container where a volume is mounted and accessible. The path must consist of lowercase alphanumeric characters or -. Use / to indicate subdirectories.
  9. Click Update storage.

If you increased the storage size, the workbench restarts and is unavailable for a period of time that is usually proportional to the size change.
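
The sizes in this dialog use binary units: a gibibyte (GiB) is 1024³ bytes, slightly larger than the decimal gigabyte (GB) that storage vendors often quote. A quick sketch of the difference:

```shell
#!/bin/sh
# Binary (GiB/MiB) versus decimal (GB) sizes, in bytes.
gib=$((1024 * 1024 * 1024))   # 1 GiB
gb=$((1000 * 1000 * 1000))    # 1 GB
mib=$((1024 * 1024))          # 1 MiB

echo "1 GiB = $gib bytes"
echo "1 GB  = $gb bytes"
echo "1 GiB - 1 GB = $((gib - gb)) bytes"
```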

Verification

  • The storage that you updated is displayed on the Cluster storage tab for the project.

4.4. Changing the storage class for an existing cluster storage instance

When you create a workbench with cluster storage, the cluster storage is tied to a specific storage class. Later, if your data science work requires a different storage class, or if the current storage class has been deprecated, you cannot directly change the storage class on the existing cluster storage instance. Instead, you must migrate your data to a new cluster storage instance that uses the storage class that you want to use.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a workbench or project that contains cluster storage.

Procedure

  1. Stop the workbench with the storage class that you want to change.

    1. From the OpenShift AI dashboard, click Projects.

      The Projects page opens.

    2. Click the name of the project with the cluster storage instance that uses the storage class you want to change.

      The project details page opens.

    3. Click the Workbenches tab.
    4. In the Status column for the relevant workbench, click Stop.

      Wait until the Status column for the relevant workbench changes from Running to Stopped.

  2. Add a new cluster storage instance that uses the needed storage class.

    1. Click the Cluster storage tab.
    2. Click Add cluster storage.

      The Add cluster storage dialog opens.

    3. Enter a name for the cluster storage.
    4. Optional: Enter a description for the cluster storage.
    5. Select the needed storage class for the cluster storage.
    6. For storage classes that support multiple access modes, select an Access mode to define how the volume can be accessed. For more information, see About persistent storage.

      Only the access modes that have been enabled for the storage class by your cluster and OpenShift AI administrators are visible.

    7. Under Persistent storage size, enter a size in gibibytes or mebibytes.
    8. In the Workbench connections section, click Add workbench.
    9. In the Name field, select an existing workbench from the list.
    10. In the Path format field, select Standard if your storage directory begins with /opt/app-root/src, otherwise select Custom.
    11. In the Mount path field, enter the path to a model or directory within a container where a volume is mounted and accessible. For example, backup.
    12. Click Add storage.
  3. Copy the data from the existing cluster storage instance to the new cluster storage instance.

    1. Click the Workbenches tab.
    2. In the Status column for the relevant workbench, click Start.
    3. When the workbench status is Running, click Open to open the workbench.
    4. In JupyterLab, click File → New → Terminal.
    5. Copy the data to the new storage directory. Replace <mount_folder_name> with the storage directory of your new cluster storage instance.

      rsync -avO --exclude='/opt/app-root/src/<mount_folder_name>' /opt/app-root/src/ /opt/app-root/src/<mount_folder_name>/

      For example:

      rsync -avO --exclude='/opt/app-root/src/backup' /opt/app-root/src/ /opt/app-root/src/backup/
    6. After the data has finished copying, log out of JupyterLab.
  4. Stop the workbench.

    1. Click the Workbenches tab.
    2. In the Status column for the relevant workbench, click Stop.

      Wait until the Status column for the relevant workbench changes from Running to Stopped.

  5. Remove the original cluster storage instance from the workbench.

    1. Click the Cluster storage tab.
    2. Click the action menu (⋮) beside the existing cluster storage instance, and then click Edit storage.
    3. Under Existing connected workbenches, remove the workbench.
    4. Click Update.
  6. Update the mount folder of the new cluster storage instance by removing it and re-adding it to the workbench.

    1. On the Cluster storage tab, click the action menu (⋮) beside the new cluster storage instance, and then click Edit storage.
    2. Under Existing connected workbenches, remove the workbench.
    3. Click Update.
    4. Click the Workbenches tab.
    5. Click the action menu (⋮) beside the workbench and then click Edit workbench.
    6. In the Cluster storage section, under Use existing persistent storage, select the new cluster storage instance.
    7. Click Update workbench.
  7. Restart the workbench.

    1. Click the Workbenches tab.
    2. In the Status column for the relevant workbench, click Start.
  8. Optional: The initial cluster storage that uses the previous storage class is still visible on the Cluster storage tab. If you no longer need this cluster storage (for example, if the storage class is deprecated), you can delete it.
  9. Optional: You can delete the mount folder of your new cluster storage instance (for example, the backup folder).

Verification

  • On the Cluster storage tab for the project, the new cluster storage instance is displayed with the needed storage class in the Storage class column and the relevant workbench in the Connected workbenches column.
  • On the Workbenches tab for the project, the new cluster storage instance is displayed for the workbench in the Cluster storage section and has the mount path: /opt/app-root/src.

4.5. Deleting cluster storage from a project

You can delete cluster storage from your projects to free up resources and remove unwanted storage.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project with cluster storage.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project that you want to delete the storage from.

    A project details page opens.

  3. Click the Cluster storage tab.
  4. Click the action menu (⋮) beside the storage that you want to delete and then click Delete storage.

    The Delete storage dialog opens.

  5. Enter the name of the storage in the text field to confirm that you intend to delete it.
  6. Click Delete storage.

Verification

  • The storage that you deleted is no longer displayed on the Cluster storage tab for the project.
  • The persistent volume (PV) and persistent volume claim (PVC) associated with the cluster storage are both permanently deleted. This data is not recoverable.