Chapter 6. Managing storage classes


OpenShift cluster administrators use storage classes to describe the different types of storage that are available in their cluster. These storage types can represent different quality-of-service levels, backup policies, or other custom policies set by cluster administrators.

6.1. About persistent storage

OpenShift AI uses persistent storage to support workbenches, project data, and model training.

Persistent storage is provisioned through OpenShift storage classes and persistent volumes. Volume provisioning and data access are determined by access modes.

Understanding storage classes and access modes can help you choose the right storage for your use case and avoid potential risks when sharing data across multiple workbenches.

6.1.1. Storage classes in OpenShift AI

Storage classes in OpenShift AI are available from the underlying OpenShift cluster. A storage class defines how persistent volumes are provisioned, including which storage backend is used and what access modes the provisioned volumes can support. For more information, see Dynamic provisioning in the OpenShift documentation.

Cluster administrators create and configure storage classes in the OpenShift cluster. These storage classes provision persistent volumes that support one or more access modes, depending on the capabilities of the storage backend. OpenShift AI administrators then enable specific storage classes and access modes for use in OpenShift AI.

When adding cluster storage to your project or workbench, you can choose from any enabled storage classes and access modes.
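For reference, a storage class is a standard Kubernetes resource defined in the OpenShift cluster. The following manifest is an illustrative sketch only; the `standard-csi` name, the AWS EBS CSI provisioner, and the parameters are assumptions that depend entirely on your cluster's storage backend:

```yaml
# Illustrative StorageClass sketch; provisioner and parameters vary by backend.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi               # hypothetical name
provisioner: ebs.csi.aws.com       # example: AWS EBS CSI driver
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

The access modes that volumes from this class can support are determined by the provisioner, not by fields in the StorageClass itself.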

6.1.2. Access modes

Storage classes create persistent volumes that can support different access modes, depending on the storage backend. Access modes control how a volume can be mounted and used by one or more workbenches. If a storage class allows more than one access mode, you can select the one that best fits your needs when you request storage. All persistent volumes support ReadWriteOnce (RWO) by default.


ReadWriteOnce (RWO) (Default)

The storage can be attached to a single workbench or pod at a time and is ideal for most individual workloads. RWO is always enabled by default and cannot be disabled by the administrator.

ReadWriteMany (RWX)

The storage can be attached to many workbenches simultaneously. RWX enables shared data access, but can introduce data risks.

ReadOnlyMany (ROX)

The storage can be attached to many workbenches as read-only. ROX is useful for sharing reference data without allowing changes.

ReadWriteOncePod (RWOP)

The storage can be attached to a single pod on a single node with read-write permissions. RWOP is similar to RWO but includes additional node-level restrictions.
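When a user adds cluster storage, OpenShift AI requests a persistent volume claim (PVC) with the selected access mode. A minimal PVC sketch follows; the `shared-project-data` and `standard-csi` names are illustrative:

```yaml
# Illustrative PVC requesting shared (RWX) storage from an enabled class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-project-data        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                # requires a backend that supports RWX
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard-csi   # an enabled storage class
```

If the backend for the storage class does not support the requested access mode, the claim cannot be bound.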

6.1.2.1. Using shared storage (RWX)

The ReadWriteMany (RWX) access mode allows multiple workbenches to access and write to the same storage volume at the same time. Use RWX access mode for collaborative work where multiple users need to access shared datasets or project files.

However, shared storage introduces several risks:

  • Data corruption or data loss: If multiple workbenches modify the same part of a file simultaneously, the data can become corrupted or lost. Ensure your applications or workflows are designed to safely handle shared access, for example, by using file locking or database transactions.
  • Security and privacy: If a workbench with access to shared storage is compromised, all data on that volume might be at risk. Only share sensitive data with trusted workbenches and users.

To use shared storage safely:

  • Ensure that your tools or workflows are designed to work with shared storage and can manage simultaneous writes. For example, use databases or distributed data processing frameworks.
  • Be cautious with changes. Deleting or editing files affects everyone who shares the volume.
  • Back up your data regularly, which can help prevent data loss due to mistakes or misconfigurations.
  • Limit access to RWX volumes to trusted users and secure workbenches.
  • Use ReadWriteMany (RWX) only when collaboration on a shared volume is required. For most individual tasks, ReadWriteOnce (RWO) is ideal because only one workbench can write to the volume at a time.
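As an illustration of the file-locking approach mentioned above, the following Python sketch serializes appends to a file on a shared volume using a POSIX advisory lock. The helper name and file path are hypothetical, and the `fcntl` module is available only on Unix-like systems:

```python
import fcntl
import os

def append_with_lock(path: str, line: str) -> None:
    """Append a line to a file on a shared (RWX) volume, holding an
    exclusive advisory lock so that concurrent writers do not interleave."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())        # push the write to the backing store
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Advisory locks protect only writers that cooperate by taking the lock; they do not stop an unrelated process from writing to the same file, so every workbench sharing the volume must use the same convention.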

6.2. Configuring storage class settings

As an OpenShift AI administrator, you can manage the following OpenShift cluster storage class settings for use within OpenShift AI:

  • Display name
  • Description
  • Access modes
  • Whether users can use the storage class when creating or editing cluster storage

These settings do not impact the storage class within OpenShift.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Storage classes.

    The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift.

  2. To enable or disable a storage class for users, on the row containing the storage class, click the toggle in the Enable column.
  3. To edit a storage class, on the row containing the storage class, click the action menu (⋮) and then select Edit.

    The Edit storage class details dialog opens.

  4. Optional: In the Display Name field, update the name for the storage class. This name is used only in OpenShift AI and does not impact the storage class within OpenShift.
  5. Optional: In the Description field, update the description for the storage class. This description is used only in OpenShift AI and does not impact the storage class within OpenShift.
  6. For storage classes that support multiple access modes, select an Access mode to define how the volume can be accessed. For more information, see About persistent storage.

    Only the access modes that have been enabled for the storage class by your cluster and OpenShift AI administrators are visible.

  7. Click Save.

Verification

  • If you enabled a storage class, the storage class is available for selection when a user adds cluster storage to a data science project or workbench.
  • If you disabled a storage class, the storage class is not available for selection when a user adds cluster storage to a data science project or workbench.
  • If you edited a storage class name, the updated storage class name is displayed when a user adds cluster storage to a data science project or workbench.

6.3. Configuring the default storage class

As an OpenShift AI administrator, you can configure the default storage class for OpenShift AI to be different from the default storage class in OpenShift.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Storage classes.

    The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift.

  2. If the storage class that you want to set as the default is not enabled, on the row containing the storage class, click the toggle in the Enable column.
  3. To set a storage class as the default for OpenShift AI, on the row containing the storage class, select Set as default.

Verification

  • When a user adds cluster storage to a data science project or workbench, the default storage class that you configured is automatically selected.

6.4. Overview of object storage endpoints

To ensure correct configuration of object storage in OpenShift AI, you must format endpoints correctly for each supported type of object storage. The following instructions describe how to format endpoints for Amazon S3, MinIO, and other S3-compatible storage solutions, minimizing configuration errors and ensuring compatibility.

Important

Properly formatted endpoints enable connectivity and reduce the risk of misconfigurations. Use the appropriate endpoint format for your object storage type. Improper formatting might cause connection errors or restrict access to storage resources.

6.4.1. MinIO (on-cluster)

For on-cluster MinIO instances, use a local endpoint URL format. Ensure the following when configuring MinIO endpoints:

  • Prefix the endpoint with http:// or https:// depending on your MinIO security setup.
  • Include the cluster IP or hostname, followed by the port number if specified.
  • Use a port number if your MinIO instance requires one (default is typically 9000).

Example:

http://minio-cluster.local:9000
Note

Verify that the MinIO instance is accessible within the cluster by checking your cluster DNS settings and network configurations.

6.4.2. Amazon S3

When configuring endpoints for Amazon S3, use region-specific URLs. Amazon S3 endpoints generally follow this format:

  • Prefix the endpoint with https://.
  • Format as <bucket-name>.s3.<region>.amazonaws.com, where <bucket-name> is the name of your S3 bucket, and <region> is the AWS region code (for example, us-west-1, eu-central-1).

Example:

https://my-bucket.s3.us-west-2.amazonaws.com
Note

For improved security and compliance, ensure that your Amazon S3 bucket is in the correct region.

6.4.3. Other S3-compatible object stores

For S3-compatible storage solutions other than Amazon S3, follow the specific endpoint format required by your provider. Generally, these endpoints include the following items:

  • The provider's base URL, prefixed with https://.
  • The bucket name and region parameters, as specified by the provider.

Review the documentation from your S3-compatible provider to confirm the required endpoint format, and replace placeholder values such as <bucket-name> and <region> with your specific configuration details.
Warning

Incorrectly formatted endpoints for S3-compatible providers might lead to access denial. Always verify the format in your storage provider documentation to ensure compatibility.

6.4.4. Verification and troubleshooting

After configuring endpoints, verify connectivity by performing a test upload or accessing the object storage directly through the OpenShift AI dashboard. For troubleshooting, check the following items:

  • Network Accessibility: Confirm that the endpoint is reachable from your OpenShift AI cluster.
  • Authentication: Ensure correct access credentials for each storage type.
  • Endpoint Accuracy: Double-check the endpoint URL format for any typos or missing components.
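The endpoint accuracy check can be partially automated. The following Python sketch (the function name is illustrative, not part of any OpenShift AI API) flags common structural problems in an endpoint URL before you use it in a connection:

```python
from urllib.parse import urlparse

def check_endpoint(url: str) -> list[str]:
    """Return a list of structural problems with an object storage
    endpoint URL; an empty list means no problems were found."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        problems.append("missing or unsupported scheme; use http:// or https://")
    if not parsed.hostname:
        problems.append("missing hostname")
    if parsed.path not in ("", "/"):
        problems.append("unexpected path component; endpoints are host-only URLs")
    if (parsed.scheme == "http" and parsed.hostname
            and parsed.hostname.endswith(".amazonaws.com")):
        problems.append("Amazon S3 endpoints must use https://")
    return problems

# check_endpoint("http://minio-cluster.local:9000")              -> []
# check_endpoint("https://my-bucket.s3.us-west-2.amazonaws.com") -> []
```

A check like this catches typos and missing schemes, but it cannot verify that the endpoint is actually reachable or that your credentials are valid.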