Chapter 5. Using Container Storage Interface (CSI)


5.1. Configuring CSI volumes

The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage back ends that implement the CSI interface as persistent storage.

Note

OpenShift Container Platform 4.11 supports version 1.5.0 of the CSI specification.

5.1.1. CSI Architecture

CSI drivers are typically shipped as container images. These containers are not aware of OpenShift Container Platform where they run. To use a CSI-compatible storage back end in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver.

The following diagram provides a high-level overview of the components running in pods in the OpenShift Container Platform cluster.

Architecture of CSI components

It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar.

5.1.1.1. External CSI controllers

The external CSI controllers component is a deployment that deploys one or more pods with five containers:

  • The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects.
  • The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on the PersistentVolumeClaim object.
  • An external CSI attacher container translates attach and detach calls from OpenShift Container Platform to the respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
  • An external CSI provisioner container translates provision and delete calls from OpenShift Container Platform to the respective CreateVolume and DeleteVolume calls to the CSI driver.
  • A CSI driver container.

The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.

Note

Attach, detach, provision, and delete operations typically require the CSI driver to use credentials to the storage back end. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node.

Note

The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Container Platform attachment API.

5.1.1.2. CSI driver daemon set

The CSI driver daemon set runs a pod on every node that allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:

  • A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
  • A CSI driver.

The CSI driver deployed on the node should have as few credentials to the storage back end as possible. OpenShift Container Platform uses only the node plugin set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.

5.1.2. CSI drivers supported by OpenShift Container Platform

OpenShift Container Platform installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.

To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Container Platform installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
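For example, you can list the CSI drivers that are registered in your cluster and the driver pods that serve them; the output depends on your platform and on which CSI Driver Operators are installed:

    $ oc get csidriver

    $ oc get pods -n openshift-cluster-csi-drivers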

The following table describes the CSI drivers that are installed with OpenShift Container Platform and which CSI features they support, such as volume snapshots, cloning, and resize.

Table 5.1. Supported CSI drivers and features in OpenShift Container Platform

  CSI driver                                          CSI volume snapshots   CSI cloning   CSI resize
  AliCloud Disk                                       ✅                      -             ✅
  AWS EBS                                             ✅                      -             ✅
  AWS EFS                                             -                       -             -
  Google Cloud Platform (GCP) persistent disk (PD)    ✅                      ✅            ✅
  IBM VPC Block                                       -                       -             ✅
  Microsoft Azure Disk                                ✅                      ✅            ✅
  Microsoft Azure Stack Hub                           ✅                      ✅            ✅
  Microsoft Azure File                                -                       -             ✅
  OpenStack Cinder                                    ✅                      ✅            ✅
  OpenShift Data Foundation                           ✅                      ✅            ✅
  OpenStack Manila                                    ✅                      -             -
  Red Hat Virtualization (oVirt)                      -                       -             ✅
  VMware vSphere                                      ✅ [1]                  -             ✅ [2]

1. vSphere CSI volume snapshots:

  • Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi.
  • Does not support fileshare volumes.

2. vSphere CSI resize:

  • Offline volume expansion: the minimum required vSphere version is 6.7 Update 3 P06.
  • Online volume expansion: the minimum required vSphere version is 7.0 Update 2.

Important

If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.

5.1.3. Dynamic provisioning

Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Container Platform and the parameters available for configuration.

The created storage class can be configured to enable dynamic provisioning.

Procedure

  • Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.

    # oc create -f - << EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage-class> 1
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: <provisioner-name> 2
    parameters:
    EOF
    1
    The name of the storage class that will be created.
    2
    The name of the CSI driver that has been installed.

5.1.4. Example using the CSI driver

The following example installs a default MySQL template without any changes to the template.

Prerequisites

  • The CSI driver has been deployed.
  • A storage class has been created for dynamic provisioning.

Procedure

  • Create the MySQL template:

    # oc new-app mysql-persistent

    Example output

    --> Deploying template "openshift/mysql-persistent" to project default
    ...

    # oc get pvc

    Example output

    NAME    STATUS   VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    mysql   Bound    kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s

5.2. CSI inline ephemeral volumes

Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a pod is deployed and deletes them when the pod is destroyed.

This feature is only available with supported Container Storage Interface (CSI) drivers.

Important

CSI inline ephemeral volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

5.2.1. Overview of CSI inline ephemeral volumes

Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination.

This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume object. Inline volumes are ephemeral and do not persist across pod restarts.

5.2.1.1. Support limitations

By default, OpenShift Container Platform supports CSI inline ephemeral volumes with these limitations:

  • Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
  • The Shared Resource CSI Driver supports inline ephemeral volumes as a Technology Preview feature.
  • Community or storage vendors provide other CSI drivers that support these volumes. Follow the installation instructions provided by the CSI driver provider.

CSI drivers might not have implemented the inline volume functionality, including Ephemeral capacity. For details, see the CSI driver documentation.

Important

Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

5.2.2. Embedding a CSI inline ephemeral volume in the pod specification

You can embed a CSI inline ephemeral volume in the Pod specification in OpenShift Container Platform. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed.

Procedure

  1. Create the Pod object definition and save it to a file.
  2. Embed the CSI inline ephemeral volume in the file.

    my-csi-app.yaml

    kind: Pod
    apiVersion: v1
    metadata:
      name: my-csi-app
    spec:
      containers:
        - name: my-frontend
          image: busybox
          volumeMounts:
          - mountPath: "/data"
            name: my-csi-inline-vol
          command: [ "sleep", "1000000" ]
      volumes: 1
        - name: my-csi-inline-vol
          csi:
            driver: inline.storage.kubernetes.io
            volumeAttributes:
              foo: bar

    1
    The name of the volume that is used by pods.
  3. Create the object definition file that you saved in the previous step.

    $ oc create -f my-csi-app.yaml

5.3. Shared Resource CSI Driver Operator

As a cluster administrator, you can use the Shared Resource CSI Driver in OpenShift Container Platform to provision inline ephemeral volumes that contain the contents of Secret or ConfigMap objects. This way, pods and other Kubernetes types that expose volume mounts, and OpenShift Container Platform Builds can securely use the contents of those objects across potentially any namespace in the cluster. To accomplish this, there are currently two types of shared resources: a SharedSecret custom resource for Secret objects, and a SharedConfigMap custom resource for ConfigMap objects.

Important

The Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

To enable the Shared Resource CSI Driver, you must enable features using feature gates.
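As a minimal sketch, assuming you enable the driver through the TechPreviewNoUpgrade feature set (the same FeatureGate object shown later in "Manually enabling CSI automatic migration"; note that enabling it cannot be undone and prevents cluster upgrades), the object looks like the following:

    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster
    spec:
      featureSet: TechPreviewNoUpgrade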

5.3.1. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.3.2. Sharing secrets across namespaces

To share a secret across namespaces in a cluster, you create a SharedSecret custom resource (CR) instance for the Secret object that you want to share.

Prerequisites

You must have permission to perform the following actions:

  • Create instances of the sharedsecrets.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level.
  • Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances.
  • Manage roles and role bindings to control whether the service account specified by a pod can mount a Container Storage Interface (CSI) volume that references the SharedSecret CR instance you want to use.
  • Access the namespaces that contain the Secrets you want to share.

Procedure

  • Create a SharedSecret CR instance for the Secret object you want to share across namespaces in the cluster:

    $ oc apply -f - <<EOF
    apiVersion: sharedresource.openshift.io/v1alpha1
    kind: SharedSecret
    metadata:
      name: my-share
    spec:
      secretRef:
        name: <name of secret>
        namespace: <namespace of secret>
    EOF

5.3.3. Using a SharedSecret instance in a pod

To access a SharedSecret custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedSecret CR instance.

Prerequisites

  • You have created a SharedSecret CR instance for the secret you want to share across namespaces in the cluster.
  • You must have permission to perform the following actions:

    • Create build configs and start builds.
    • Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back.
    • Determine if the builder service accounts available to you in your namespace are allowed to use the given SharedSecret CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed.
Note

If the last two prerequisites in this list are not met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances.

Procedure

  1. Grant a given service account RBAC permissions to use the SharedSecret CR instance in its pod by using oc apply with YAML content:

    Note

    Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role …​ to create the role needed for consuming SharedSecret CR instances.

    $ oc apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: shared-resource-my-share
      namespace: my-namespace
    rules:
      - apiGroups:
          - sharedresource.openshift.io
        resources:
          - sharedsecrets
        resourceNames:
          - my-share
        verbs:
          - use
    EOF
  2. Create the RoleBinding associated with the role by using the oc command:

    $ oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder
  3. Access the SharedSecret CR instance from a pod:

    $ oc apply -f - <<EOF
    kind: Pod
    apiVersion: v1
    metadata:
      name: my-app
      namespace: my-namespace
    spec:
      serviceAccountName: default # this service account must be granted the 'use' role for the SharedSecret CR instance
      # containers omitted. Follow standard use of 'volumeMounts' to reference your shared resource volume.
      volumes:
      - name: my-csi-volume
        csi:
          readOnly: true
          driver: csi.sharedresource.openshift.io
          volumeAttributes:
            sharedSecret: my-share
    EOF

5.3.4. Sharing a config map across namespaces

To share a config map across namespaces in a cluster, you create a SharedConfigMap custom resource (CR) instance for that config map.

Prerequisites

You must have permission to perform the following actions:

  • Create instances of the sharedconfigmaps.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level.
  • Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances.
  • Manage roles and role bindings across the namespaces in the cluster to control which service accounts in pods that mount your Container Storage Interface (CSI) volume can use those instances.
  • Access the namespaces that contain the config maps you want to share.

Procedure

  1. Create a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster:

    $ oc apply -f - <<EOF
    apiVersion: sharedresource.openshift.io/v1alpha1
    kind: SharedConfigMap
    metadata:
      name: my-share
    spec:
      configMapRef:
        name: <name of configmap>
        namespace: <namespace of configmap>
    EOF

5.3.5. Using a SharedConfigMap instance in a pod

To access a SharedConfigMap custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedConfigMap CR instance.

Prerequisites

  • You have created a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster.
  • You must have permission to perform the following actions:

    • Create build configs and start builds.
    • Discover which SharedConfigMap CR instances are available by entering the oc get sharedconfigmaps command and getting a non-empty list back.
    • Determine if the builder service accounts available to you in your namespace are allowed to use the given SharedConfigMap CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedConfigMap> to see if the builder service account in your namespace is listed.
Note

If the last two prerequisites in this list are not met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedConfigMap CR instances and enable service accounts to use SharedConfigMap CR instances.

Procedure

  1. Grant a given service account RBAC permissions to use the SharedConfigMap CR instance in its pod by using oc apply with YAML content.

    Note

    Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role …​ to create the role needed for consuming a SharedConfigMap CR instance.

    $ oc apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: shared-resource-my-share
      namespace: my-namespace
    rules:
      - apiGroups:
          - sharedresource.openshift.io
        resources:
          - sharedconfigmaps
        resourceNames:
          - my-share
        verbs:
          - use
    EOF
  2. Create the RoleBinding associated with the role by using the oc command:

    $ oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder
  3. Access the SharedConfigMap CR instance from a pod:

    $ oc apply -f - <<EOF
    kind: Pod
    apiVersion: v1
    metadata:
      name: my-app
      namespace: my-namespace
    spec:
      serviceAccountName: default # this service account must be granted the 'use' role for the SharedConfigMap CR instance
      # containers omitted. Follow standard use of 'volumeMounts' to reference your shared resource volume.
      volumes:
      - name: my-csi-volume
        csi:
          readOnly: true
          driver: csi.sharedresource.openshift.io
          volumeAttributes:
            sharedConfigMap: my-share
    EOF

5.3.6. Additional support limitations for the Shared Resource CSI Driver

The Shared Resource CSI Driver has the following noteworthy limitations:

  • The driver is subject to the limitations of Container Storage Interface (CSI) inline ephemeral volumes.
  • The value of the readOnly field must be true. Otherwise, on volume provisioning during pod startup, the driver returns an error to the kubelet. This limitation is in keeping with proposed best practices for the upstream Kubernetes CSI Driver to apply SELinux labels to associated volumes.
  • The driver ignores the FSType field because it only supports tmpfs volumes.
  • The driver ignores the NodePublishSecretRef field. Instead, it uses SubjectAccessReviews with the use verb to evaluate whether a pod can obtain a volume that contains SharedSecret or SharedConfigMap custom resource (CR) instances.

5.3.7. Additional details about VolumeAttributes on shared resource pod volumes

The following attributes affect shared resource pod volumes in various ways:

  • The refreshResource attribute in the volumeAttributes properties.
  • The refreshResources attribute in the Shared Resource CSI Driver configuration.
  • The sharedSecret and sharedConfigMap attributes in the volumeAttributes properties.

5.3.7.1. The refreshResource attribute

The Shared Resource CSI Driver honors the refreshResource attribute in volumeAttributes properties of the volume. This attribute controls whether updates to the contents of the underlying Secret or ConfigMap object are copied to the volume after the volume is initially provisioned as part of pod startup. The default value of refreshResource is true, which means that the contents are updated.
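For example, a minimal sketch of a volume definition that disables refresh for one specific mount, reusing the my-share SharedSecret and the driver name from the earlier examples, might look like the following. Note that volumeAttributes values are strings:

    volumes:
    - name: my-csi-volume
      csi:
        readOnly: true
        driver: csi.sharedresource.openshift.io
        volumeAttributes:
          sharedSecret: my-share
          refreshResource: "false"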

Important

If the Shared Resource CSI Driver configuration has disabled the refreshing of both SharedSecret and SharedConfigMap custom resource (CR) instances, then the refreshResource attribute in the volumeAttributes properties has no effect. The intent of this attribute is to disable refresh for specific volume mounts when refresh is generally allowed.

5.3.7.2. The refreshResources attribute

You can use a global switch to enable or disable refreshing of shared resources. This switch is the refreshResources attribute in the csi-driver-shared-resource-config config map for the Shared Resource CSI Driver, which you can find in the openshift-cluster-csi-drivers namespace. If you set this refreshResources attribute to false, none of the Secret or ConfigMap object-related content stored in the volume is updated after the initial provisioning of the volume.
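You can inspect or change this setting by viewing and editing the config map directly. The exact key layout inside the config map is defined by the driver, so review the existing contents before you change anything:

    $ oc get configmap csi-driver-shared-resource-config -n openshift-cluster-csi-drivers -o yaml

    $ oc edit configmap csi-driver-shared-resource-config -n openshift-cluster-csi-drivers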

Important

Using this Shared Resource CSI Driver configuration to disable refreshing affects all the cluster’s volume mounts that use the Shared Resource CSI Driver, regardless of the refreshResource attribute in the volumeAttributes properties of any of those volumes.

5.3.7.3. Validation of volumeAttributes before provisioning a shared resource volume for a pod

In the volumeAttributes of a single volume, you must set either a sharedSecret or a sharedConfigMap attribute to the value of a SharedSecret or a SharedConfigMap CR instance. Otherwise, when the volume is provisioned during pod startup, a validation checks the volumeAttributes of that volume and returns an error to the kubelet under the following conditions:

  • Both sharedSecret and sharedConfigMap attributes have specified values.
  • Neither sharedSecret nor sharedConfigMap attributes have specified values.
  • The value of the sharedSecret or sharedConfigMap attribute does not correspond to the name of a SharedSecret or SharedConfigMap CR instance on the cluster.

5.3.8. Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds

Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds makes using Red Hat subscriptions (RHEL entitlements) easier in OpenShift Container Platform Builds.

Previously, in OpenShift Container Platform 4.9.x and earlier, you manually imported your credentials and copied them to each project or namespace where you were running builds.

Now, in OpenShift Container Platform 4.10 and later, OpenShift Container Platform Builds can use Red Hat subscriptions (RHEL entitlements) by referencing shared resources and the simple content access feature provided by Insights Operator:

  • The simple content access feature imports your subscription credentials to a well-known Secret object. See the links in the following "Additional resources" section.
  • The cluster administrator creates a SharedSecret custom resource (CR) instance around that Secret object and grants permission to particular projects or namespaces. In particular, the cluster administrator gives the builder service account permission to use that SharedSecret CR instance.
  • Builds that run within those projects or namespaces can mount a CSI Volume that references the SharedSecret CR instance and its entitled RHEL content.
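The following is a minimal sketch of such a build, assuming a Docker-strategy BuildConfig and a hypothetical Git repository. The SharedSecret name (my-share), the driver name, and the readOnly requirement follow the earlier examples in this section; the builder service account must already be granted the use role for the SharedSecret CR instance:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: my-entitled-build
      namespace: my-namespace
    spec:
      source:
        type: Git
        git:
          uri: https://github.com/example/app.git # hypothetical repository
      strategy:
        type: Docker
        dockerStrategy:
          volumes:
          - name: etc-pki-entitlement
            mounts:
            - destinationPath: /etc/pki/entitlement # where entitled RHEL content is expected during the build
            source:
              type: CSI
              csi:
                driver: csi.sharedresource.openshift.io
                readOnly: true
                volumeAttributes:
                  sharedSecret: my-share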

5.4. CSI volume snapshots

This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested.

5.4.1. Overview of CSI volume snapshots

A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.

OpenShift Container Platform supports Container Storage Interface (CSI) volume snapshots by default. However, a specific CSI driver is required.

With CSI volume snapshots, a cluster administrator can:

  • Deploy a third-party CSI driver that supports snapshots.
  • Create a new persistent volume claim (PVC) from an existing volume snapshot.
  • Take a snapshot of an existing PVC.
  • Restore a snapshot as a different PVC.
  • Delete an existing volume snapshot.

With CSI volume snapshots, an app developer can:

  • Use volume snapshots as building blocks for developing application- or cluster-level storage backup solutions.
  • Rapidly rollback to a previous development version.
  • Use storage more efficiently by not having to make a full copy each time.

Be aware of the following when using volume snapshots:

  • Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
  • OpenShift Container Platform only ships with select CSI drivers. For CSI drivers that are not provided by an OpenShift Container Platform Driver Operator, it is recommended to use the CSI drivers provided by community or storage vendors. Follow the installation instructions furnished by the CSI driver provider.
  • CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the csi-external-snapshotter sidecar. See documentation provided by the CSI driver for details.

5.4.2. CSI snapshot controller and sidecar

OpenShift Container Platform provides a snapshot controller that is deployed into the control plane. In addition, your CSI driver vendor provides the CSI snapshot sidecar as a helper container that is installed during the CSI driver installation.

The CSI snapshot controller and sidecar provide volume snapshotting through the OpenShift Container Platform API. These external components run in the cluster.

The external controller is deployed by the CSI Snapshot Controller Operator.

5.4.2.1. External controller

The CSI snapshot controller binds VolumeSnapshot and VolumeSnapshotContent objects. The controller manages dynamic provisioning by creating and deleting VolumeSnapshotContent objects.

5.4.2.2. External sidecar

Your CSI driver vendor provides the csi-external-snapshotter sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering CreateSnapshot and DeleteSnapshot operations. Follow the installation instructions provided by your vendor.

5.4.3. About the CSI Snapshot Controller Operator

The CSI Snapshot Controller Operator runs in the openshift-cluster-storage-operator namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default.

The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the openshift-cluster-storage-operator namespace.

5.4.3.1. Volume snapshot CRDs

During OpenShift Container Platform installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the snapshot.storage.k8s.io/v1 API group:

VolumeSnapshotContent

A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator.

Similar to the PersistentVolume object, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage back end.

For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent CRDs. These carry the details of the real volume snapshot in the storage system.

The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator.

VolumeSnapshot

Similar to the PersistentVolumeClaim object, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot CRD with an appropriate VolumeSnapshotContent CRD. The binding is a one-to-one mapping.

The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot.

VolumeSnapshotClass

Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they cannot be expressed by using the same storage class as the persistent volume claim.

The VolumeSnapshotClass CRD defines the parameters for the csi-external-snapshotter sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported.

Dynamically provisioned snapshots use the VolumeSnapshotClass CRD to specify storage-provider-specific parameters to use when creating a snapshot.

The VolumeSnapshotClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end.

5.4.4. Volume snapshot provisioning

There are two ways to provision snapshots: dynamically and manually.

5.4.4.1. Dynamic provisioning

Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a VolumeSnapshotClass CRD.

5.4.4.2. Manual provisioning

As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users.
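For example, a minimal sketch of a pre-provisioned VolumeSnapshotContent object, assuming the hostpath example driver used elsewhere in this chapter and placeholder values for the snapshot handle and namespace, might look like the following. The names mycontent and snapshot-demo match the manual provisioning example later in this section:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotContent
    metadata:
      name: mycontent
    spec:
      deletionPolicy: Retain
      driver: hostpath.csi.k8s.io
      source:
        snapshotHandle: <snapshot_id_in_storage_back_end>
      volumeSnapshotRef:
        name: snapshot-demo
        namespace: <namespace_of_volumesnapshot>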

5.4.5. Creating a volume snapshot

When you create a VolumeSnapshot object, OpenShift Container Platform creates a volume snapshot.

Prerequisites

  • Logged in to a running OpenShift Container Platform cluster.
  • A PVC created using a CSI driver that supports VolumeSnapshot objects.
  • A storage class to provision the storage back end.
  • No pods are using the persistent volume claim (PVC) that you want to take a snapshot of.

    Note

    Do not create a volume snapshot of a PVC if a pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Be sure to first tear down a running pod to ensure consistent snapshots.
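    For example, if the PVC is mounted by a pod that is managed by a Deployment, one minimal way to quiesce it is to scale the workload down before taking the snapshot; the deployment and namespace names are placeholders:

    $ oc scale deployment/<deployment_name> --replicas=0 -n <namespace>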

Procedure

To dynamically create a volume snapshot:

  1. Create a file with the VolumeSnapshotClass object described by the following YAML:

    volumesnapshotclass.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snap
    driver: hostpath.csi.k8s.io 1
    deletionPolicy: Delete

    1
    The name of the CSI driver that is used to create snapshots of this VolumeSnapshotClass object. The name must be the same as the Provisioner field of the storage class that is responsible for the PVC that is being snapshotted.
    Note

    Depending on the driver that you used to configure persistent storage, additional parameters might be required. You can also use an existing VolumeSnapshotClass object.

  2. Create the object you saved in the previous step by entering the following command:

    $ oc create -f volumesnapshotclass.yaml
  3. Create a VolumeSnapshot object:

    volumesnapshot-dynamic.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: mysnap
    spec:
      volumeSnapshotClassName: csi-hostpath-snap 1
      source:
        persistentVolumeClaimName: myclaim 2

    1
    The request for a particular class by the volume snapshot. If the volumeSnapshotClassName setting is absent and there is a default volume snapshot class, a snapshot is created with the default volume snapshot class name. But if the field is absent and no default volume snapshot class exists, then no snapshot is created.
    2
    The name of the PersistentVolumeClaim object bound to a persistent volume. This defines what you want to create a snapshot of. Required for dynamically provisioning a snapshot.
  4. Create the object you saved in the previous step by entering the following command:

    $ oc create -f volumesnapshot-dynamic.yaml

To manually provision a snapshot:

  1. Provide a value for the volumeSnapshotContentName parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above.

    volumesnapshot-manual.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot-demo
    spec:
      source:
        volumeSnapshotContentName: mycontent 1

    1
    The volumeSnapshotContentName parameter is required for pre-provisioned snapshots.
  2. Create the object you saved in the previous step by entering the following command:

    $ oc create -f volumesnapshot-manual.yaml

Verification

After the snapshot has been created in the cluster, additional details about the snapshot are available.

  1. To display details about the volume snapshot that was created, enter the following command:

    $ oc describe volumesnapshot mysnap

    The following example displays details about the mysnap volume snapshot:

    volumesnapshot.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: mysnap
    spec:
      source:
        persistentVolumeClaimName: myclaim
      volumeSnapshotClassName: csi-hostpath-snap
    status:
      boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1
      creationTime: "2020-01-29T12:24:30Z" 2
      readyToUse: true 3
      restoreSize: 500Mi

    1
    The pointer to the actual storage content that was created by the controller.
    2
    The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time.
    3
    If the value is set to true, the snapshot can be used to restore as a new PVC.
    If the value is set to false, the snapshot was created. However, the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes.
  2. To verify that the volume snapshot was created, enter the following command:

    $ oc get volumesnapshotcontent

    The pointer to the actual content is displayed. If the boundVolumeSnapshotContentName field is populated, a VolumeSnapshotContent object exists and the snapshot was created.

  3. To verify that the snapshot is ready, confirm that the VolumeSnapshot object has readyToUse: true.
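    For example, one way to check this field directly, using the snapshot name from the earlier example, is with a JSONPath query:

    $ oc get volumesnapshot mysnap -o jsonpath='{.status.readyToUse}'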

5.4.6. Deleting a volume snapshot

You can configure how OpenShift Container Platform deletes volume snapshots.

Procedure

  1. Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example:

    volumesnapshotclass.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snap
    driver: hostpath.csi.k8s.io
    deletionPolicy: Delete 1

    1
    When deleting the volume snapshot, if the Delete value is set, the underlying snapshot is deleted along with the VolumeSnapshotContent object. If the Retain value is set, both the underlying snapshot and VolumeSnapshotContent object remain.
    If the Retain value is set and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent object, the content remains. The snapshot itself is also retained in the storage back end.
  2. Delete the volume snapshot by entering the following command:

    $ oc delete volumesnapshot <volumesnapshot_name>

    Example output

    volumesnapshot.snapshot.storage.k8s.io "mysnapshot" deleted

  3. If the deletion policy is set to Retain, delete the volume snapshot content by entering the following command:

    $ oc delete volumesnapshotcontent <volumesnapshotcontent_name>
  4. Optional: If the VolumeSnapshot object is not successfully deleted, enter the following command to remove any finalizers for the leftover resource so that the delete operation can continue:

    Important

    Only remove the finalizers if you are confident that there are no existing references from either persistent volume claims or volume snapshot contents to the VolumeSnapshot object. Even with the --force option, the delete operation does not delete snapshot objects until all finalizers are removed.

    $ oc patch -n $PROJECT volumesnapshot/$NAME --type=merge -p '{"metadata": {"finalizers":null}}'

    Example output

    volumesnapshot.snapshot.storage.k8s.io/mysnapshot patched

    The finalizers are removed and the volume snapshot is deleted.

5.4.7. Restoring a volume snapshot

The VolumeSnapshot CRD content can be used to restore the existing volume to a previous state.

After your VolumeSnapshot CRD is bound and the readyToUse value is set to true, you can use that resource to provision a new volume that is pre-populated with data from the snapshot.

Prerequisites

  • Logged in to a running OpenShift Container Platform cluster.
  • A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports volume snapshots.
  • A storage class to provision the storage back end.
  • A volume snapshot has been created and is ready to use.

Procedure

  1. Specify a VolumeSnapshot data source on a PVC as shown in the following:

    pvc-restore.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim-restore
    spec:
      storageClassName: csi-hostpath-sc
      dataSource:
        name: mysnap 1
        kind: VolumeSnapshot 2
        apiGroup: snapshot.storage.k8s.io 3
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    1
    Name of the VolumeSnapshot object representing the snapshot to use as source.
    2
    Must be set to the VolumeSnapshot value.
    3
    Must be set to the snapshot.storage.k8s.io value.
  2. Create a PVC by entering the following command:

    $ oc create -f pvc-restore.yaml
  3. Verify that the restored PVC has been created by entering the following command:

    $ oc get pvc

    A new PVC such as myclaim-restore is displayed.

5.5. CSI volume cloning

Volume cloning duplicates an existing persistent volume to help protect against data loss in OpenShift Container Platform. This feature is only available with supported Container Storage Interface (CSI) drivers. You should be familiar with persistent volumes before you provision a CSI volume clone.

5.5.1. Overview of CSI volume cloning

A Container Storage Interface (CSI) volume clone is a duplicate of an existing persistent volume at a particular point in time.

Volume cloning is similar to volume snapshots, although it is more efficient. For example, a cluster administrator can duplicate a cluster volume by creating another instance of the existing cluster volume.

Cloning creates an exact duplicate of the specified volume on the back-end device, rather than creating a new empty volume. After dynamic provisioning, you can use a volume clone just as you would use any standard volume.

No new API objects are required for cloning. The existing dataSource field in the PersistentVolumeClaim object is expanded so that it can accept the name of an existing PersistentVolumeClaim in the same namespace.

5.5.1.1. Support limitations

By default, OpenShift Container Platform supports CSI volume cloning with these limitations:

  • The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC.
  • Cloning is supported with a different storage class.

    • The destination volume can use the same or a different storage class as the source.
    • You can use the default storage class and omit storageClassName in the spec.
  • Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
  • CSI drivers might not have implemented the volume cloning functionality. For details, see the CSI driver documentation.

5.5.2. Provisioning a CSI volume clone

When you create a cloned persistent volume claim (PVC) API object, you trigger the provisioning of a CSI volume clone. The clone pre-populates with the contents of another PVC, adhering to the same rules as any other persistent volume. The one exception is that you must add a dataSource that references an existing PVC in the same namespace.

Prerequisites

  • You are logged in to a running OpenShift Container Platform cluster.
  • Your PVC is created using a CSI driver that supports volume cloning.
  • Your storage back end is configured for dynamic provisioning. Cloning support is not available for static provisioners.

Procedure

To clone a PVC from an existing PVC:

  1. Create and save a file with the PersistentVolumeClaim object described by the following YAML:

    pvc-clone.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-1-clone
      namespace: mynamespace
    spec:
      storageClassName: csi-cloning 1
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      dataSource:
        kind: PersistentVolumeClaim
        name: pvc-1

    1
    The name of the storage class that provisions the storage back end. The default storage class can be used and storageClassName can be omitted in the spec.
  2. Create the object you saved in the previous step by running the following command:

    $ oc create -f pvc-clone.yaml

    A new PVC pvc-1-clone is created.

  3. Verify that the volume clone was created and is ready by running the following command:

    $ oc get pvc pvc-1-clone

    The pvc-1-clone shows that it is Bound.

    You are now ready to use the newly cloned PVC to configure a pod.

  4. Create and save a file with the Pod object described by the YAML. For example:

    kind: Pod
    apiVersion: v1
    metadata:
      name: mypod
    spec:
      containers:
        - name: myfrontend
          image: dockerfile/nginx
          volumeMounts:
          - mountPath: "/var/www/html"
            name: mypd
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: pvc-1-clone 1
    1
    The cloned PVC created during the CSI volume cloning operation.

    The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original dataSource PVC.

5.6. CSI automatic migration

In-tree storage drivers that are traditionally shipped with OpenShift Container Platform are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. OpenShift Container Platform provides automatic migration for certain supported in-tree volume plugins to their equivalent CSI drivers.

5.6.1. Overview

Volumes that are provisioned by using in-tree storage plugins, and that are supported by this feature, are migrated to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; OpenShift Container Platform only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor are its contents changed.

The following in-tree to CSI drivers are supported:

Table 5.2. CSI automatic migration feature: supported in-tree/CSI drivers

In-tree/CSI drivers:

  • Azure Disk
  • OpenStack Cinder

Support level: Generally available (GA)

CSI automatic migration enabled automatically? Yes. For more information, see "Automatic migration of in-tree volumes to CSI".

In-tree/CSI drivers:

  • Amazon Web Services (AWS) Elastic Block Storage (EBS)
  • Azure File
  • Google Compute Engine Persistent Disk (in-tree) and Google Cloud Platform Persistent Disk (CSI)
  • VMware vSphere

Support level: Technology Preview (TP)

CSI automatic migration enabled automatically? No. To enable, see "Manually enabling CSI automatic migration".

CSI automatic migration should be seamless. This feature does not change how you use existing API objects, such as PersistentVolumes, PersistentVolumeClaims, and StorageClasses.

Enabling CSI automatic migration for in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plugin did not support them.

5.6.2. Automatic migration of in-tree volumes to CSI

OpenShift Container Platform supports automatic and seamless migration for the following in-tree volume types to their Container Storage Interface (CSI) driver counterpart:

  • Azure Disk
  • OpenStack Cinder

CSI migration for these volume types is considered generally available (GA), and requires no manual intervention.

For new installations of OpenShift Container Platform 4.11 and later, the default storage class is the CSI storage class. All volumes provisioned using this storage class are CSI persistent volumes (PVs).

For clusters upgraded from 4.10 and earlier to 4.11 and later, the CSI storage class is created and is set as the default if no default storage class was set before the upgrade. In the very unlikely case that a storage class with the same name already exists, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion, to work for existing in-tree PVs. While storage classes that reference the in-tree storage plugin continue to work, we recommend that you switch the default storage class to the CSI storage class.
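For example, a minimal way to move the default from an in-tree storage class to its CSI counterpart is to flip the storageclass.kubernetes.io/is-default-class annotation on both storage classes; the storage class names are placeholders for your environment:

    $ oc patch storageclass <in_tree_storage_class> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

    $ oc patch storageclass <csi_storage_class> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'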

5.6.3. Manually enabling CSI automatic migration

If you want to test Container Storage Interface (CSI) migration in development or staging OpenShift Container Platform clusters, you must manually enable in-tree to CSI migration for the following in-tree volume types:

  • AWS Elastic Block Storage (EBS)
  • Google Compute Engine Persistent Disk (GCE-PD)
  • VMware vSphere Disk
  • Azure File
Important

CSI automatic migration for the preceding in-tree volume plugins and CSI driver pairs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

After migration, the default storage class remains the in-tree storage class.

CSI automatic migration will be enabled by default for all in-tree storage plugins in a future OpenShift Container Platform release, so it is highly recommended that you test it now and report any issues.

Note

Enabling CSI automatic migration drains, and then restarts, all nodes in the cluster in sequence. This might take some time.

Procedure

  • Enable feature gates (see Nodes → Working with clusters → Enabling features using feature gates).

    Important

    After turning on Technology Preview features using feature gates, they cannot be turned off. As a result, cluster upgrades are prevented.

    The following configuration example enables CSI automatic migration for all CSI drivers supported by this feature that are currently in Technology Preview (TP) status:

    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster
    spec:
      featureSet: TechPreviewNoUpgrade 1
    ...
    1
    Enables automatic migration for AWS EBS, GCP, Azure File, and VMware vSphere.

    You can enable CSI automatic migration for selected CSI drivers by setting featureSet to CustomNoUpgrade and specifying one or more of the following feature gates:

    • CSIMigrationAWS
    • CSIMigrationAzureFile
    • CSIMigrationGCE
    • CSIMigrationvSphere

    The following configuration example enables automatic migration to the AWS EBS CSI driver only:

    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster
    spec:
      featureSet: CustomNoUpgrade
      customNoUpgrade:
        enabled:
          - CSIMigrationAWS 1
        ...
    1
    Enables automatic migration for AWS EBS only.

5.7. AliCloud Disk CSI Driver Operator

5.7.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Alibaba AliCloud Disk Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to AliCloud Disk storage assets, OpenShift Container Platform installs the AliCloud Disk CSI Driver Operator and the AliCloud Disk CSI driver, by default, in the openshift-cluster-csi-drivers namespace.

  • The AliCloud Disk CSI Driver Operator provides a storage class (alicloud-disk) that you can use to create persistent volume claims (PVCs). The AliCloud Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage.
  • The AliCloud Disk CSI driver enables you to create and mount AliCloud Disk PVs.
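For example, a minimal persistent volume claim sketch that uses the alicloud-disk storage class might look like the following; the claim name and requested size are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-alicloud-pvc
    spec:
      storageClassName: alicloud-disk
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi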

5.7.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.8. AWS Elastic Block Store CSI Driver Operator

5.8.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic Block Store (EBS).

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Container Platform installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in Persistent storage using AWS Elastic Block Store.
  • The AWS EBS CSI driver enables you to create and mount AWS EBS PVs.
Note

If you installed the AWS EBS CSI Operator and driver on an OpenShift Container Platform 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to OpenShift Container Platform 4.11.

5.8.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Important

OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage.

In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.

For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Container Platform, see Persistent storage using AWS Elastic Block Store.

5.9. AWS Elastic File Service CSI Driver Operator

5.9.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File System (EFS).

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

After installing the AWS EFS CSI Driver Operator, OpenShift Container Platform installs the AWS EFS CSI Operator and the AWS EFS CSI driver in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.

  • The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default that you can use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass, as sketched after the following note. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand. This eliminates the need for cluster administrators to pre-provision storage.
  • The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.
Note

AWS EFS only supports regional volumes, not zonal volumes.
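The following is a minimal sketch of a manually created AWS EFS storage class, assuming access point (efs-ap) provisioning; the file system ID is a placeholder, and your environment might require additional parameters:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap
      fileSystemId: <efs_file_system_id>
      directoryPerms: "700"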

5.9.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.9.3. Installing the AWS EFS CSI Driver Operator

The AWS EFS CSI Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

To install the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.
  2. Install the AWS EFS CSI Operator:

    1. Click Operators → OperatorHub.
    2. Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.
    3. Click the AWS EFS CSI Driver Operator button.

      Important

      Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.

    4. On the AWS EFS CSI Driver Operator page, click Install.
    5. On the Install Operator page, ensure that:

      • All namespaces on the cluster (default) is selected.
      • Installed Namespace is set to openshift-cluster-csi-drivers.
    6. Click Install.

      After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.

  3. If you are using AWS EFS with AWS Security Token Service (STS), you must configure the AWS EFS CSI Driver with STS. For more information, see "Configuring AWS EFS CSI Driver with STS".
  4. Install the AWS EFS CSI Driver:

    1. Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
    2. On the Instances tab, click Create ClusterCSIDriver.
    3. Use the following YAML file:

      apiVersion: operator.openshift.io/v1
      kind: ClusterCSIDriver
      metadata:
        name: efs.csi.aws.com
      spec:
        managementState: Managed
    4. Click Create.
    5. Wait for the following Conditions to change to a "true" status. A command to check them is sketched after this procedure:

      • AWSEFSDriverCredentialsRequestControllerAvailable
      • AWSEFSDriverNodeServiceControllerAvailable
      • AWSEFSDriverControllerServiceControllerAvailable
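
You can also check these conditions from the CLI. The following is a minimal verification sketch, assuming the ClusterCSIDriver object is named efs.csi.aws.com as in the previous step:

  $ oc get clustercsidriver efs.csi.aws.com -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

Each output line shows a condition type and its status; the three conditions listed above should report True.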

5.9.4. Configuring AWS EFS CSI Driver Operator with Security Token Service

This procedure explains how to configure the AWS EFS CSI Driver Operator with OpenShift Container Platform on AWS Security Token Service (STS).

Perform this procedure after installing the AWS EFS CSI Operator, but before installing the AWS EFS CSI driver as part of the Installing the AWS EFS CSI Driver Operator procedure. If you perform this procedure after installing the driver and creating volumes, your volumes will fail to mount into pods.

Prerequisites

  • AWS account credentials

Procedure

To configure the AWS EFS CSI Driver Operator with STS:

  1. Extract the CCO utility (ccoctl) binary from the OpenShift Container Platform release image, which you used to install the cluster with STS. For more information, see "Configuring the Cloud Credential Operator utility".
  2. Create and save an EFS CredentialsRequest YAML file, as shown in the following example, and then place it in the credrequests directory:

    Example

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: openshift-aws-efs-csi-driver
      namespace: openshift-cloud-credential-operator
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - action:
          - elasticfilesystem:*
          effect: Allow
          resource: '*'
      secretRef:
        name: aws-efs-cloud-credentials
        namespace: openshift-cluster-csi-drivers
      serviceAccountNames:
      - aws-efs-csi-driver-operator
      - aws-efs-csi-driver-controller-sa

  3. Run the ccoctl tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (<path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml).

    $ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
    • --name=<name> is the name used to tag any cloud resources that are created for tracking.
    • --region=<aws_region> is the AWS region where cloud resources are created.
    • --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the EFS CredentialsRequest file created in the previous step.
    • <aws_account_id> is the AWS account ID.

      Example

      $ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com

      Example output

      2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- created
      2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
      2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-

  4. Create the AWS EFS cloud credentials and secret (a verification sketch follows this procedure):

    $ oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml

    Example

    $ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml

    Example output

    secret/aws-efs-cloud-credentials created
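
To confirm that the secret exists, you can query it directly. This is a minimal verification sketch; the secret name and namespace come from the CredentialsRequest object created earlier in this procedure:

  $ oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers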

5.9.5. Creating the AWS EFS storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.

The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.

5.9.5.1. Creating the AWS EFS storage class using the console

Procedure

  1. In the OpenShift Container Platform console, click Storage → StorageClasses.
  2. On the StorageClasses page, click Create StorageClass.
  3. On the StorageClass page, perform the following steps:

    1. Enter a name to reference the storage class.
    2. Optional: Enter the description.
    3. Select the reclaim policy.
    4. Select efs.csi.aws.com from the Provisioner drop-down list.
    5. Optional: Set the configuration parameters for the selected provisioner.
  4. Click Create.

5.9.5.2. Creating the AWS EFS storage class using the CLI

Procedure

  • Create a StorageClass object (an example of applying it follows the parameter descriptions):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap 1
      fileSystemId: fs-a5324911 2
      directoryPerms: "700" 3
      gidRangeStart: "1000" 4
      gidRangeEnd: "2000" 5
      basePath: "/dynamic_provisioning" 6
    1
    provisioningMode must be efs-ap to enable dynamic provisioning.
    2
    fileSystemId must be the ID of the EFS volume created manually.
    3
    directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
    4 5
    gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
    6
    basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as “/dynamic_provisioning/<random uuid>” on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
    Note

    A cluster admin can create several StorageClass objects, each using a different EFS volume.
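
To apply this storage class, save the YAML to a file and create it with oc. The following is a minimal usage sketch; the file name efs-sc.yaml is an assumption:

  $ oc create -f efs-sc.yaml
  $ oc get storageclass efs-sc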

5.9.6. Creating and configuring access to EFS volumes in AWS

This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OpenShift Container Platform.

Prerequisites

  • AWS account credentials

Procedure

To create and configure access to an EFS volume in AWS:

  1. On the AWS console, open https://console.aws.amazon.com/efs.
  2. Click Create file system:

    • Enter a name for the file system.
    • For Virtual Private Cloud (VPC), select your OpenShift Container Platform cluster's virtual private cloud (VPC).
    • Accept default settings for all other selections.
  3. Wait for the volume and mount targets to finish being fully created:

    1. Go to https://console.aws.amazon.com/efs#/file-systems.
    2. Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).
  4. On the Network tab, copy the Security Group ID (you will need this in the next step).
  5. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.
  6. On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OpenShift Container Platform nodes to access EFS volumes:

    • Type: NFS
    • Protocol: TCP
    • Port range: 2049
    • Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)

      This step allows OpenShift Container Platform to use NFS ports from the cluster.

  7. Save the rule.

5.9.7. Dynamic provisioning for AWS EFS

The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS access point for each such subdirectory. Due to AWS access point limits, you can dynamically provision only 1,000 PVs from a single StorageClass/EFS volume.

Important

Note that PVC.spec.resources is not enforced by EFS.

In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (for example, petabytes). A broken or even rogue application can cause significant expenses when it stores too much data on the volume.

Monitoring EFS volume sizes in AWS is strongly recommended.
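
For example, you can periodically check the metered size of the file system with the AWS CLI. This is a minimal sketch, assuming the AWS CLI is configured and that fs-a5324911 is the file system ID used in your storage class:

  $ aws efs describe-file-systems --file-system-id fs-a5324911 --query 'FileSystems[0].SizeInBytes.Value'

You can also set alarms on this metric in AWS to be notified when usage grows unexpectedly.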

Prerequisites

  • You have created AWS EFS volumes.
  • You have created the AWS EFS storage class.

Procedure

To enable dynamic provisioning:

  • Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created above. An example pod that consumes the claim is shown after the following YAML.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test
    spec:
      storageClassName: efs-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
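
Any workload can then mount the claim. The following is a minimal pod sketch; the pod name, image, and mount path are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: efs-app
  spec:
    containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
      - name: efs-volume
        mountPath: /data          # data written here lands in the dynamically provisioned EFS subdirectory
    volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: test           # the PVC created above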

If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.

5.9.8. Creating static PVs with AWS EFS

It is possible to use an AWS EFS volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.

Prerequisites

  • You have created AWS EFS volumes.

Procedure

  • Create the PV using the following YAML file (a claim that binds to it is sketched after the parameter descriptions):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity: 1
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-ae66151a 2
        volumeAttributes:
          encryptInTransit: "false" 3
    1
    spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
    2
    volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
    3
    If desired, you can disable encryption in transit. Encryption is enabled by default.
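
To use the static PV, create a claim that binds to it by name. The following is a minimal sketch; the claim name is an assumption, and storageClassName is set to an empty string so that the claim is not dynamically provisioned:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: efs-static-claim
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: ""          # disable dynamic provisioning for this claim
    volumeName: efs-pv            # bind to the PV defined above
    resources:
      requests:
        storage: 5Gi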

If you have problems setting up static PVs, see AWS EFS troubleshooting.

5.9.9. AWS EFS security

The following information is important for AWS EFS security.

When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.

As a consequence, EFS volumes silently ignore FSGroup; OpenShift Container Platform is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.

Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.

5.9.10. AWS EFS troubleshooting

The following information provides guidance on how to troubleshoot issues with AWS EFS:

  • The AWS EFS Operator and CSI driver run in the openshift-cluster-csi-drivers namespace.
  • To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:

    $ oc adm must-gather
    [must-gather      ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
    [must-gather      ] OUT namespace/openshift-must-gather-xm4wq created
    [must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
    [must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
  • To show AWS EFS Operator errors, view the ClusterCSIDriver status:

    $ oc get clustercsidriver efs.csi.aws.com -o yaml
  • If a volume cannot be mounted to a pod (as shown in the output of the following command):

    $ oc describe pod
    ...
      Type     Reason       Age    From               Message
      ----     ------       ----   ----               -------
      Normal   Scheduled    2m13s  default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
      Warning  FailedMount  13s    kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1
      Warning  FailedMount  10s    kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
    1
    Warning message indicating volume not mounted.

    This error is frequently caused by AWS dropping packets between an OpenShift Container Platform node and AWS EFS.

    Check that the following are correct:

    • AWS firewall and Security Groups
    • Networking: port number and IP addresses

5.9.11. Uninstalling the AWS EFS CSI Driver Operator

All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

To uninstall the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.
  2. Stop all applications that use AWS EFS PVs.
  3. Delete all AWS EFS PVs:

    1. Click Storage → PersistentVolumeClaims.
    2. Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.
  4. Uninstall the AWS EFS CSI Driver:

    Note

    You must remove the CSI driver before you can uninstall the Operator.

    1. Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
    2. On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.
    3. When prompted, click Delete.
  5. Uninstall the AWS EFS CSI Operator:

    1. Click Operators → Installed Operators.
    2. On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.
    3. On the upper right of the Installed Operators → Operator details page, click Actions → Uninstall Operator.
    4. When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.

      After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.

Note

Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. An OpenShift Container Platform cluster cannot be destroyed when there is an EFS volume that uses the cluster’s VPC. Amazon does not allow deletion of such a VPC.
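
For example, you can remove the file system with the AWS CLI before running openshift-install destroy cluster. This is a minimal sketch; the placeholder IDs are assumptions, and all mount targets must be deleted before the file system itself can be deleted:

  $ aws efs describe-mount-targets --file-system-id <file_system_id>
  $ aws efs delete-mount-target --mount-target-id <mount_target_id>
  $ aws efs delete-file-system --file-system-id <file_system_id>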

5.9.12. Additional resources

5.10. Azure Disk CSI Driver Operator

5.10.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to Azure Disk storage assets, OpenShift Container Platform installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The Azure Disk CSI Driver Operator provides a storage class named managed-csi that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
  • The Azure Disk CSI driver enables you to create and mount Azure Disk PVs.

5.10.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Important

OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage.

In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

After full migration, in-tree plugins will eventually be removed in later versions of OpenShift Container Platform.

5.10.3. Creating a storage class with storage account type

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, you can obtain dynamically provisioned persistent volumes.

When creating a storage class, you can designate the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Standard_LRS, Premium_LRS, StandardSSD_LRS, UltraSSD_LRS, Premium_ZRS, and StandardSSD_ZRS. For information about finding your Azure SKU tier, see SKU Types.

ZRS has some region limitations. For information about these limitations, see ZRS limitations.

Prerequisites

  • Access to an OpenShift Container Platform cluster with administrator rights

Procedure

Use the following steps to create a storage class with a storage account type.

  1. Create a storage class designating the storage account type using a YAML file similar to the following:

    $ oc create -f - << EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage-class> 1
    provisioner: disk.csi.azure.com
    parameters:
      skuName: <storage-class-account-type> 2
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    EOF
    1
    Storage class name.
    2
    Storage account type. This corresponds to your Azure storage account SKU tier: Standard_LRS, Premium_LRS, StandardSSD_LRS, UltraSSD_LRS, Premium_ZRS, or StandardSSD_ZRS.
  2. Ensure that the storage class was created by listing the storage classes:

    $ oc get storageclass

    Example output

    $ oc get storageclass
    NAME                    PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    azurefile-csi           file.csi.azure.com   Delete          Immediate              true                   68m
    managed-csi (default)   disk.csi.azure.com   Delete          WaitForFirstConsumer   true                   68m
    sc-prem-zrs             disk.csi.azure.com   Delete          WaitForFirstConsumer   true                   4m25s 1

    1
    New storage class with storage account type.

5.10.4. Machine sets that deploy machines with ultra disks using PVCs

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.

5.10.4.1. Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the machine set that you want to provision machines with ultra disks.

  2. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
      ...
    spec:
      ...
      template:
        ...
        spec:
          metadata:
            ...
            labels:
              ...
              disk: ultrassd 1
              ...
          providerSpec:
            value:
              ...
              ultraSSDCapability: Enabled 2
                 ...
    1
    Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2
    These lines enable the use of ultra disks.
  3. Create a machine set using the updated configuration by running the following command:

    $ oc create -f <machine-set-name>.yaml
  4. Create a storage class that contains the following YAML definition:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ultra-disk-sc 1
    parameters:
      cachingMode: None
      diskIopsReadWrite: "2000" 2
      diskMbpsReadWrite: "320" 3
      kind: managed
      skuname: UltraSSD_LRS
    provisioner: disk.csi.azure.com 4
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer 5
    1
    Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
    2
    Specify the number of IOPS for the storage class.
    3
    Specify the throughput in MBps for the storage class.
    4
    For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
    5
    Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
  5. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ultra-disk 1
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ultra-disk-sc 2
      resources:
        requests:
          storage: 4Gi 3
    1
    Specify the name of the PVC. This procedure uses ultra-disk for this value.
    2
    This PVC references the ultra-disk-sc storage class.
    3
    Specify the size of the storage request. The minimum value is 4Gi.
  6. Create a pod that contains the following YAML definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-ultra
    spec:
      nodeSelector:
        disk: ultrassd 1
      containers:
      - name: nginx-ultra
        image: alpine:latest
        command:
          - "sleep"
          - "infinity"
        volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: ultra-disk 2
    1
    Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value.
    2
    This pod references the ultra-disk PVC.

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps

  • To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ssd-benchmark1
    spec:
      containers:
      - name: ssd-benchmark1
        image: nginx
        ports:
          - containerPort: 80
            name: "http-server"
        volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
      volumes:
        - name: lun0p1
          hostPath:
            path: /var/lib/lun0p1
            type: DirectoryOrCreate
      nodeSelector:
        disk: ultrassd

5.10.4.2. Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

5.10.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk

If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered.

For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
  • To identify the cause of the issue, describe the pod by running the following command:

    $ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>

5.10.5. Additional resources

5.11. Azure File CSI Driver Operator

5.11.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to Azure File storage assets, OpenShift Container Platform installs the Azure File CSI Driver Operator and the Azure File CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The Azure File CSI Driver Operator provides a storage class named azurefile-csi that you can use to create persistent volume claims (PVCs). An example claim is shown after this list.
  • The Azure File CSI driver enables you to create and mount Azure File PVs. The Azure File CSI driver supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
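
For example, a claim against the azurefile-csi storage class can be defined as follows. This is a minimal sketch; the claim name and requested size are assumptions:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: azurefile-claim
  spec:
    accessModes:
      - ReadWriteMany            # Azure File shares support shared access
    storageClassName: azurefile-csi
    resources:
      requests:
        storage: 5Gi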

The Azure File CSI Driver Operator does not support the following:

  • Virtual hard disks (VHD)
  • Network File System (NFS): OpenShift Container Platform does not deploy an NFS-backed storage class.
  • Running on nodes with FIPS mode enabled.

For more information about supported features, see Supported CSI drivers and features.

5.11.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.12. Azure Stack Hub CSI Driver Operator

5.12.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure Stack Hub Storage. Azure Stack Hub, which is part of the Azure Stack portfolio, allows you to run apps in an on-premises environment and deliver Azure services in your datacenter.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to Azure Stack Hub storage assets, OpenShift Container Platform installs the Azure Stack Hub CSI Driver Operator and the Azure Stack Hub CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The Azure Stack Hub CSI Driver Operator provides a storage class (managed-csi), with "Standard_LRS" as the default storage account type, that you can use to create persistent volume claims (PVCs). The Azure Stack Hub CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
  • The Azure Stack Hub CSI driver enables you to create and mount Azure Stack Hub PVs.

5.12.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.12.3. Additional resources

5.13. GCP PD CSI Driver Operator

5.13.1. Overview

OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Container Platform installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs. You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.
  • GCP PD driver: The driver enables you to create and mount GCP PD PVs.
Important

OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision GCP PD storage.

In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.

5.13.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.13.3. GCP PD CSI driver storage class parameters

The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation.

The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Container Platform.

Table 5.3. CreateVolume Parameters

  • type
    Values: pd-ssd or pd-standard
    Default: pd-standard
    Description: Allows you to choose between standard PVs or solid-state-drive PVs.

  • replication-type
    Values: none or regional-pd
    Default: none
    Description: Allows you to choose between zonal or regional PVs.

  • disk-encryption-kms-key
    Values: Fully qualified resource identifier for the key to use to encrypt new disks.
    Default: Empty string
    Description: Uses customer-managed encryption keys (CMEK) to encrypt new disks.

5.13.4. Creating a custom-encrypted persistent volume

When you create a PersistentVolumeClaim object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV.

For encryption, the newly attached PV that you create uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key.

Prerequisites

  • You are logged in to a running OpenShift Container Platform cluster.
  • You have created a Cloud KMS key ring and key version.

For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).

Procedure

To create a custom-encrypted PV, complete the following steps:

  1. Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-gce-pd-cmek
    provisioner: pd.csi.storage.gke.io
    volumeBindingMode: "WaitForFirstConsumer"
    allowVolumeExpansion: true
    parameters:
      type: pd-standard
      disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1
    1
    This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource’s ID and Getting a Cloud KMS resource ID.
    Note

    You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io.

  2. Deploy the storage class on your OpenShift Container Platform cluster, and then verify the storage class by using the oc command:

    $ oc describe storageclass csi-gce-pd-cmek

    Example output

    Name:                  csi-gce-pd-cmek
    IsDefaultClass:        No
    Annotations:           None
    Provisioner:           pd.csi.storage.gke.io
    Parameters:            disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
    AllowVolumeExpansion:  true
    MountOptions:          none
    ReclaimPolicy:         Delete
    VolumeBindingMode:     WaitForFirstConsumer
    Events:                none

  3. Create a file named pvc.yaml that matches the name of your storage class object that you created in the previous step:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: csi-gce-pd-cmek
      resources:
        requests:
          storage: 6Gi
    Note

    If you marked the new storage class as default, you can omit the storageClassName field.

  4. Apply the PVC on your cluster:

    $ oc apply -f pvc.yaml
  5. Get the status of your PVC and verify that it is created and bound to a newly provisioned PV:

    $ oc get pvc

    Example output

    NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    podpvc    Bound     pvc-e36abf50-84f3-11e8-8538-42010a800002   6Gi        RWO            csi-gce-pd-cmek  9s

    Note

    If your storage class has the volumeBindingMode field set to WaitForFirstConsumer, you must create a pod to use the PVC before you can verify it.

Your CMEK-protected PV is now ready to use with your OpenShift Container Platform cluster.

5.14. IBM VPC Block CSI Driver Operator

5.14.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM Virtual Private Cloud (VPC) Block Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to IBM VPC Block storage assets, OpenShift Container Platform installs the IBM VPC Block CSI Driver Operator and the IBM VPC Block CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The IBM VPC Block CSI Driver Operator provides three storage classes named ibmc-vpc-block-10iops-tier (default), ibmc-vpc-block-5iops-tier, and ibmc-vpc-block-custom for different tiers that you can use to create persistent volume claims (PVCs). The IBM VPC Block CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. An example claim that uses the default storage class is shown after this list.
  • The IBM VPC Block CSI driver enables you to create and mount IBM VPC Block PVs.
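
For example, a claim against the default ibmc-vpc-block-10iops-tier storage class can be defined as follows. This is a minimal sketch; the claim name and requested size are assumptions:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ibm-block-claim
  spec:
    accessModes:
      - ReadWriteOnce            # block storage is mounted by a single node at a time
    storageClassName: ibmc-vpc-block-10iops-tier
    resources:
      requests:
        storage: 10Gi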

5.14.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Additional resources

5.15. OpenStack Cinder CSI Driver Operator

5.15.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, OpenShift Container Platform installs the OpenStack Cinder CSI Driver Operator and the OpenStack Cinder CSI driver in the openshift-cluster-csi-drivers namespace.

  • The OpenStack Cinder CSI Driver Operator provides a CSI storage class that you can use to create PVCs.
  • The OpenStack Cinder CSI driver enables you to create and mount OpenStack Cinder PVs.

For OpenShift Container Platform, automatic migration from OpenStack Cinder in-tree to the CSI driver is available as a Technology Preview (TP) feature. With migration enabled, volumes provisioned using the existing in-tree plugin are automatically migrated to use the OpenStack Cinder CSI driver. For more information, see CSI automatic migration feature.

5.15.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Important

OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage.

In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.

5.15.3. Making OpenStack Cinder CSI the default storage class

The OpenStack Cinder CSI driver uses the cinder.csi.openstack.org parameter key to support dynamic provisioning.

To enable OpenStack Cinder CSI provisioning in OpenShift Container Platform, it is recommended that you overwrite the default in-tree storage class with standard-csi. Alternatively, you can create the persistent volume claim (PVC) and specify the storage class as "standard-csi".

In OpenShift Container Platform, the default storage class references the in-tree Cinder driver. However, with CSI automatic migration enabled, volumes created using the default storage class actually use the CSI driver.

Procedure

Use the following steps to apply the standard-csi storage class by overwriting the default in-tree storage class.

  1. List the storage class:

    $ oc get storageclass

    Example output

    NAME                   PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    standard(default)      kubernetes.io/cinder       Delete          WaitForFirstConsumer   true                   46h
    standard-csi           cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   46h

  2. Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class, as shown in the following example:

    $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  3. Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true.

    $ oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  4. Verify that the standard-csi storage class is now set as the default:

    $ oc get storageclass

    Example output

    NAME                   PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    standard               kubernetes.io/cinder       Delete          WaitForFirstConsumer   true                   46h
    standard-csi(default)  cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   46h

  5. Optional: You can define a new PVC without having to specify the storage class:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cinder-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    A PVC that does not specify a specific storage class is automatically provisioned by using the default storage class.

  6. Optional: After the new file has been configured, create it in your cluster:

    $ oc create -f cinder-claim.yaml

Additional resources

5.16. OpenStack Manila CSI Driver Operator

5.16.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for the OpenStack Manila shared file system service.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to Manila storage assets, OpenShift Container Platform installs the Manila CSI Driver Operator and the Manila CSI driver by default on any OpenStack cluster that has the Manila service enabled.

  • The Manila CSI Driver Operator creates the required storage class that is needed to create PVCs for all available Manila share types. The Operator is installed in the openshift-cluster-csi-drivers namespace.
  • The Manila CSI driver enables you to create and mount Manila PVs. The driver is installed in the openshift-manila-csi-driver namespace.

5.16.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.16.3. Manila CSI Driver Operator limitations

The following limitations apply to the Manila Container Storage Interface (CSI) Driver Operator:

Only NFS is supported
OpenStack Manila supports many network-attached storage protocols, such as NFS, CIFS, and CEPHFS, and these can be selectively enabled in the OpenStack cloud. The Manila CSI Driver Operator in OpenShift Container Platform only supports using the NFS protocol. If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform.
Snapshots are not supported if the back end is CephFS-NFS
To take snapshots of persistent volumes (PVs) and revert volumes to snapshots, you must ensure that the Manila share type that you are using supports these features. A Red Hat OpenStack administrator must enable support for snapshots (share type extra-spec snapshot_support) and for creating shares from snapshots (share type extra-spec create_share_from_snapshot_support) in the share type associated with the storage class you intend to use.
FSGroups are not supported
Since Manila CSI provides shared file systems for access by multiple readers and multiple writers, it does not support the use of FSGroups. This is true even for persistent volumes created with the ReadWriteOnce access mode. It is therefore important not to specify the fsType attribute in any storage class that you manually create for use with Manila CSI Driver.
Important

In Red Hat OpenStack Platform 16.x and 17.x, the Shared File Systems service (Manila) with CephFS through NFS fully supports serving shares to OpenShift Container Platform through the Manila CSI. However, this solution is not intended for massive scale. Be sure to review important recommendations in CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform.

5.16.4. Dynamically provisioning Manila CSI volumes

OpenShift Container Platform installs a storage class for each available Manila share type.

The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plugin. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests.

You can use the same pod and persistent volume claim (PVC) definitions on-premise that you use with OpenShift Container Platform on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition.

Note

Manila service is optional. If the service is not enabled in Red Hat OpenStack Platform (RHOSP), the Manila CSI driver is not installed and the storage classes for Manila are not created.

Prerequisites

  • RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform.

Procedure (UI)

To dynamically create a Manila CSI volume using the web console:

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the required options on the resulting page.

    1. Select the appropriate storage class.
    2. Enter a unique name for the storage claim.
    3. Select the access mode to specify read and write access for the PVC you are creating.

      Important

      Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster.

  4. Define the size of the storage claim.
  5. Click Create to create the persistent volume claim and generate a persistent volume.

Procedure (CLI)

To dynamically create a Manila CSI volume using the command-line interface (CLI):

  1. Create and save a file with the PersistentVolumeClaim object described by the following YAML:

    pvc-manila.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-manila
    spec:
      accessModes: 1
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-manila-gold 2

    1
    Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster.
    2
    The name of the storage class that provisions the storage back end. Manila storage classes are provisioned by the Operator and have the csi-manila- prefix.
  2. Create the object you saved in the previous step by running the following command:

    $ oc create -f pvc-manila.yaml

    A new PVC is created.

  3. To verify that the volume was created and is ready, run the following command:

    $ oc get pvc pvc-manila

    The output shows that pvc-manila is Bound.

You can now use the new PVC to configure a pod.
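
For example, the following minimal pod sketch mounts the new claim; the pod name, image, and mount path are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: manila-app
  spec:
    containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
      - name: shared-data
        mountPath: /data
    volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: pvc-manila    # the RWX claim created above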

Additional resources

5.17. Red Hat Virtualization CSI Driver Operator

5.17.1. Overview

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Red Hat Virtualization (RHV).

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to RHV storage assets, OpenShift Container Platform installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • The oVirt CSI Driver Operator provides a default StorageClass object that you can use to create Persistent Volume Claims (PVCs).
  • The oVirt CSI driver enables you to create and mount oVirt PVs.

5.17.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Note

The oVirt CSI driver does not support snapshots.

5.17.3. Red Hat Virtualization (RHV) CSI driver storage class

OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc which is used for creating dynamically provisioned persistent volumes.

To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML:

ovirt-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name>  1
  annotations:
    storageclass.kubernetes.io/is-default-class: "<boolean>"  2
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> 3
reclaimPolicy: Delete 4
volumeBindingMode: Immediate 5
parameters:
  storageDomainName: <rhv-storage-domain-name> 6
  thinProvisioning: "<boolean>"  7
  csi.storage.k8s.io/fstype: <file_system_type> 8

1
Name of the storage class.
2
Set to true to make this storage class the default storage class in the cluster. If set to true, the existing default storage class must be edited and set to false.
3
true enables dynamic volume expansion, false prevents it. true is recommended.
4
Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. This default policy is Delete.
5
Indicates how to provision and bind PersistentVolumeClaims. When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature.
6
The RHV storage domain name to use.
7
If true, the disk is thin provisioned. If false, the disk is preallocated. Thin provisioning is recommended.
8
Optional: File system type to be created. Possible values: ext4 (default) or xfs.

5.17.4. Creating a persistent volume on RHV

When you create a PersistentVolumeClaim (PVC) object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object.

Prerequisites

  • You are logged in to a running OpenShift Container Platform cluster.
  • You provided the correct RHV credentials in ovirt-credentials secret.
  • You have installed the oVirt CSI driver.
  • You have defined at least one storage class.

Procedure

  • If you are using the web console to dynamically create a persistent volume on RHV:

    1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
    2. In the persistent volume claims overview, click Create Persistent Volume Claim.
    3. Define the required options on the resulting page.
    4. Select the appropriate StorageClass object, which is ovirt-csi-sc by default.
    5. Enter a unique name for the storage claim.
    6. Select the access mode. Currently, RWO (ReadWriteOnce) is the only supported access mode.
    7. Define the size of the storage claim.
    8. Select the Volume Mode:

      Filesystem: Mounted into pods as a directory. This mode is the default.

      Block: Block device, without any file system on it

    9. Click Create to create the PersistentVolumeClaim object and generate a PersistentVolume object.
  • If you are using the command-line interface (CLI) to dynamically create a RHV CSI volume:

    1. Create and save a file with the PersistentVolumeClaim object described by the following sample YAML:

      pvc-ovirt.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-ovirt
      spec:
        storageClassName: ovirt-csi-sc 1
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <volume size>  2
        volumeMode: <volume mode> 3

      1
      Name of the required storage class.
      2
      Volume size in GiB.
      3
      Supported options:
      • Filesystem: Mounted into pods as a directory. This mode is the default.
      • Block: Block device, without any file system on it.
    2. Create the object you saved in the previous step by running the following command:

      $ oc create -f pvc-ovirt.yaml
    3. To verify that the volume was created and is ready, run the following command:

      $ oc get pvc pvc-ovirt

      The output shows that pvc-ovirt is Bound.

5.18. VMware vSphere CSI Driver Operator

5.18.1. Overview

OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OpenShift Container Platform installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • vSphere CSI Driver Operator: The Operator provides a storage class, called thin-csi, that you can use to create persistent volumes claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
  • vSphere CSI driver: The driver enables you to create and mount vSphere PVs. In OpenShift Container Platform 4.11, the driver version is 2.5.1. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Enterprise Linux CoreOS (RHCOS) release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems.
Important

OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.

In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.

Note

The vSphere CSI Driver supports dynamic and static provisioning. When using static provisioning in the PV specifications, do not use the key storage.kubernetes.io/csiProvisionerIdentity in csi.volumeAttributes because this key indicates dynamically provisioned PVs.
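
As an illustration only, a statically provisioned PV for this driver might look like the following sketch; the object name, volume handle, and file system type are assumptions and depend on how the backing volume was created in vSphere:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vsphere-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: thin-csi        # optional; must match the PVC that binds to this PV
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: <existing-volume-id>   # ID of the pre-existing vSphere volume
    fsType: ext4
    # If you set csi.volumeAttributes, do not include
    # storage.kubernetes.io/csiProvisionerIdentity; that key marks
    # dynamically provisioned PVs.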

5.18.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.18.3. vSphere storage policy

The vSphere CSI Driver Operator storage class uses vSphere’s storage policy. OpenShift Container Platform automatically creates a storage policy that targets the datastore configured in the cloud configuration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-csi
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: "$openshift-storage-policy-xxxx"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
reclaimPolicy: Delete
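
To see the storage class that was actually created in your cluster, rather than relying on the example above, you can inspect it directly, for example:

$ oc get storageclass thin-csi -o yaml

Because volumeBindingMode is set to WaitForFirstConsumer, a PVC that uses this storage class remains in the Pending state until a pod that consumes it is scheduled.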

5.18.4. ReadWriteMany vSphere volume support

If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If the vSAN file service is not configured, ReadWriteOnce (RWO) is the only available access mode; if you request an RWX volume anyway, the volume is not created and an error is logged.

For more information about configuring the vSAN file service in your environment, see vSAN File Service.

You can request RWX volumes by creating a persistent volume claim (PVC) similar to the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: thin-csi

Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service.
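
After the claim is bound, you can confirm the access mode, for example:

$ oc get pvc myclaim

The ACCESS MODES column in the output should show RWX.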

5.18.5. VMware vSphere CSI Driver Operator requirements

To install the vSphere CSI Driver Operator, the following requirements must be met:

  • VMware vSphere version 7.0 Update 1 or later
  • Virtual machines of hardware version 15 or later
  • No third-party vSphere CSI driver already installed in the cluster (an example check follows at the end of this section)
Important

If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the next major version of OpenShift Container Platform, the oc CLI prompts you with the following message:

VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable:
found existing unsupported csi.vsphere.vmware.com driver

The previous message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation.

To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver.
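
One way to check whether a vSphere CSI driver is already registered in the cluster is to list the corresponding CSIDriver object, for example:

$ oc get csidriver csi.vsphere.vmware.com

This only tells you that a driver with that name is registered. To determine whether it is a third-party installation, check where its pods run: the Red Hat driver runs in the openshift-cluster-csi-drivers namespace, while community or vendor installations typically use a different namespace, such as the upstream default vmware-system-csi.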

5.18.6. Removing a third-party vSphere CSI Driver

OpenShift Container Platform 4.11 includes a built-in version of the vSphere Container Storage Interface (CSI) Driver Operator that is supported by Red Hat.

If you have installed a vSphere CSI driver provided by the community or another vendor, which is considered a third-party vSphere CSI driver, and you continue with upgrading to the next major version of OpenShift Container Platform, the oc CLI prompts you with the following message:

VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable:
found existing unsupported csi.vsphere.vmware.com driver

The previous message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation.

The following procedure shows how to uninstall a third-party vSphere CSI driver. For more detailed instructions on removing the driver and its components, consult the vendor’s or community provider’s uninstall guide.

Important

When removing a third-party vSphere CSI driver, you do not need to delete the associated persistent volume (PV) objects. Data loss typically does not occur, but Red Hat does not take any responsibility if data loss does occur.

Procedure

  1. Delete the third-party vSphere CSI driver (VMware vSphere Container Storage Plugin) Deployment and DaemonSet objects, as shown in the example commands after this procedure.
  2. Delete the ConfigMap and Secret objects that were installed previously with the third-party vSphere CSI driver.
  3. Delete the third-party vSphere CSI driver CSIDriver object:

    $ oc delete CSIDriver csi.vsphere.vmware.com
    csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted
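
The following commands are a sketch only; the object names and the vmware-system-csi namespace are assumptions based on a typical upstream installation of the plugin and might not match your deployment:

$ oc delete deployment vsphere-csi-controller -n vmware-system-csi
$ oc delete daemonset vsphere-csi-node -n vmware-system-csi
$ oc delete configmap internal-feature-states.csi.vsphere.vmware.com -n vmware-system-csi
$ oc delete secret vsphere-config-secret -n vmware-system-csi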

After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat’s vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OpenShift Container Platform 4.11, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat’s vSphere CSI Driver Operator.

5.18.7. Additional resources
