
Chapter 5. Using Container Storage Interface (CSI)


5.1. Configuring CSI volumes

The Container Storage Interface (CSI) allows OpenShift Dedicated to consume storage from storage back ends that implement the CSI interface as persistent storage.

Note

OpenShift Dedicated 4 supports version 1.6.0 of the CSI specification.

5.1.1. CSI architecture

CSI drivers are typically shipped as container images. These containers are not aware of the OpenShift Dedicated cluster where they run. To use a CSI-compatible storage back end in OpenShift Dedicated, the cluster administrator must deploy several components that serve as a bridge between OpenShift Dedicated and the storage driver.

The following diagram provides a high-level overview of the components running in pods in the OpenShift Dedicated cluster.

Architecture of CSI components

It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar.

5.1.1.1. External CSI controllers

The external CSI controllers component is a deployment that runs one or more pods with five containers:

  • The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects.
  • The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on a PersistentVolumeClaim object.
  • An external CSI attacher container that translates attach and detach calls from OpenShift Dedicated to the respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
  • An external CSI provisioner container that translates provision and delete calls from OpenShift Dedicated to the respective CreateVolume and DeleteVolume calls to the CSI driver.
  • A CSI driver container.

The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.

Note

The attach, detach, provision, and delete operations typically require the CSI driver to use credentials for the storage back end. Run the CSI controller pods on infrastructure nodes so that the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node.

Note

The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Dedicated attachment API.

5.1.1.2. CSI driver daemon set

The CSI driver daemon set runs a pod on every node that allows OpenShift Dedicated to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:

  • A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
  • A CSI driver.

The CSI driver deployed on the node should have as few credentials to the storage back end as possible. OpenShift Dedicated only uses the node plugin set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.

5.1.2. CSI drivers supported by OpenShift Dedicated

OpenShift Dedicated installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.

To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Dedicated installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.

Important

The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see Setting up AWS Elastic File Service CSI Driver Operator. For instructions on installing the GCP Filestore CSI driver, see Google Compute Platform Filestore CSI Driver Operator.

The following table describes the CSI drivers that are supported by OpenShift Dedicated and which CSI features they support, such as volume snapshots and resize.

Important

If your CSI driver is not listed in the following table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.

Table 5.1. Supported CSI drivers and features in OpenShift Dedicated
CSI driver                                            CSI volume snapshots   CSI cloning   CSI resize   Inline ephemeral volumes

AWS EBS                                                        ✅                                ✅
AWS EFS
Google Compute Platform (GCP) persistent disk (PD)             ✅                 ✅             ✅
GCP Filestore                                                  ✅                                ✅
LVM Storage                                                    ✅                 ✅             ✅

5.1.3. Dynamic provisioning

Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Dedicated and the parameters available for configuration.

The created storage class can be configured to enable dynamic provisioning.

Procedure

  • Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.

    # oc create -f - << EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage-class> 1
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: <provisioner-name> 2
    parameters:
    EOF
    1
    The name of the storage class that will be created.
    2
    The name of the CSI driver that has been installed.
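
As a quick check of the default storage class, a PVC that omits storageClassName should be provisioned by the installed CSI driver. The following is a minimal sketch; the claim name and size are arbitrary illustrations:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-default-sc     # hypothetical name for illustration
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi          # arbitrary size for the example

Because spec.storageClassName is omitted, the claim uses whichever storage class carries the storageclass.kubernetes.io/is-default-class: "true" annotation.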

5.1.4. Example using the CSI driver

The following example installs a default MySQL template without any changes to the template.

Prerequisites

  • The CSI driver has been deployed.
  • A storage class has been created for dynamic provisioning.

Procedure

  • Create the MySQL template:

    # oc new-app mysql-persistent

    Example output

    --> Deploying template "openshift/mysql-persistent" to project default
    ...

    # oc get pvc

    Example output

    NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    mysql     Bound     kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s

5.2. Managing the default storage class

5.2.1. Overview

Managing the default storage class allows you to accomplish several different objectives:

  • Enforcing static provisioning by disabling dynamic provisioning.
  • Preventing the storage operator from re-creating the initial default storage class when you have other preferred storage classes.
  • Renaming, or otherwise changing, the default storage class.

To accomplish these objectives, you change the setting for the spec.storageClassState field in the ClusterCSIDriver object. The possible settings for this field are:

  • Managed: (Default) The Container Storage Interface (CSI) operator is actively managing its default storage class, so most manual changes that a cluster administrator makes to the default storage class are removed, and the default storage class is continuously re-created if you attempt to manually delete it.
  • Unmanaged: You can modify the default storage class. The CSI operator is not actively managing storage classes, so it does not reconcile the default storage class that it creates automatically.
  • Removed: The CSI operator deletes the default storage class.

5.2.2. Managing the default storage class using the web console

Prerequisites

  • Access to the OpenShift Dedicated web console.
  • Access to the cluster with cluster-admin privileges.

Procedure

To manage the default storage class using the web console:

  1. Log in to the web console.
  2. Click Administration > CustomResourceDefinitions.
  3. On the CustomResourceDefinitions page, type clustercsidriver to find the ClusterCSIDriver object.
  4. Click ClusterCSIDriver, and then click the Instances tab.
  5. Click the name of the desired instance, and then click the YAML tab.
  6. Add the spec.storageClassState field with a value of Managed, Unmanaged, or Removed.

    Example

    ...
    spec:
      driverConfig:
        driverType: ''
      logLevel: Normal
      managementState: Managed
      observedConfig: null
      operatorLogLevel: Normal
      storageClassState: Unmanaged 1
    ...

    1
    spec.storageClassState field set to "Unmanaged"
  7. Click Save.

5.2.3. Managing the default storage class using the CLI

Prerequisites

  • Access to the cluster with cluster-admin privileges.

Procedure

To manage the storage class using the CLI, run the following command:

oc patch clustercsidriver $DRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"${STATE}\"}}" 1
1
Where ${STATE} is "Removed" or "Managed" or "Unmanaged".

Where $DRIVERNAME is the provisioner name. You can find the provisioner name by running the command oc get sc.
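
For example, to stop the Operator from managing the default storage class of the AWS EBS CSI driver, a command like the following might be used. The driver name ebs.csi.aws.com is an assumption; confirm the name in your cluster with oc get clustercsidriver:

    $ oc patch clustercsidriver ebs.csi.aws.com --type=merge -p '{"spec":{"storageClassState":"Unmanaged"}}'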

5.2.4. Absent or multiple default storage classes

5.2.4.1. Multiple default storage classes

Multiple default storage classes can occur if you mark a non-default storage class as default without unsetting the existing default storage class, or if you create a default storage class when one is already present. With multiple default storage classes present, any persistent volume claim (PVC) that requests the default storage class (pvc.spec.storageClassName=nil) gets the most recently created default storage class, regardless of the default status of that storage class. The administrator also receives the MultipleDefaultStorageClasses alert in the alerts dashboard, indicating that multiple default storage classes exist.

5.2.4.2. Absent default storage class

There are two possible scenarios where PVCs can attempt to use a non-existent default storage class:

  • An administrator removes the default storage class or marks it as non-default, and then a user creates a PVC requesting the default storage class.
  • During installation, the installer creates a PVC requesting the default storage class, which has not yet been created.

In the preceding scenarios, PVCs remain in the pending state indefinitely. To resolve this situation, create a default storage class or declare one of the existing storage classes as the default. As soon as the default storage class is created or declared, the PVCs get the new default storage class. If possible, the PVCs eventually bind to statically or dynamically provisioned PVs as usual, and move out of the pending state.

5.2.5. Changing the default storage class

Use the following procedure to change the default storage class.

For example, suppose you have two defined storage classes, gp3 and standard, and you want to change the default storage class from gp3 to standard.

Prerequisites

  • Access to the cluster with cluster-admin privileges.

Procedure

To change the default storage class:

  1. List the storage classes:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp3 (default)        kubernetes.io/aws-ebs 1
    standard             kubernetes.io/aws-ebs

    1
    (default) indicates the default storage class.
  2. Make the desired storage class the default.

    For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command:

    $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
    Note

    You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually.

    With multiple default storage classes present, any persistent volume claim (PVC) that requests the default storage class (pvc.spec.storageClassName=nil) gets the most recently created default storage class, regardless of the default status of that storage class. The administrator also receives the MultipleDefaultStorageClasses alert in the alerts dashboard, indicating that multiple default storage classes exist.

  3. Remove the default storage class setting from the old default storage class.

    For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command:

    $ oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  4. Verify the changes:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp3                  kubernetes.io/aws-ebs
    standard (default)   kubernetes.io/aws-ebs

5.3. AWS Elastic Block Store CSI Driver Operator

5.3.1. Overview

OpenShift Dedicated is capable of provisioning persistent volumes (PVs) using the AWS EBS CSI driver.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Dedicated installs the AWS EBS CSI Driver Operator (a Red Hat operator) and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace.

5.3.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Important

OpenShift Dedicated defaults to using the CSI plugin to provision Amazon Elastic Block Store (Amazon EBS) storage.

For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Dedicated, see Persistent storage using Amazon Elastic Block Store.
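
As a minimal sketch, a PVC that requests AWS EBS-backed storage might look like the following. The storage class name gp3-csi is an assumption; check oc get storageclass for the default class name in your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ebs-claim               # hypothetical name for illustration
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp3-csi     # assumed class name; verify with `oc get storageclass`
      resources:
        requests:
          storage: 10Gi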

5.4. AWS Elastic File Service CSI Driver Operator

Important

This procedure is specific to the AWS EFS CSI Driver Operator (a Red Hat Operator), which is only applicable for OpenShift Dedicated 4.10 and later versions.

5.4.1. Overview

OpenShift Dedicated is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

After installing the AWS EFS CSI Driver Operator, OpenShift Dedicated installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.

  • The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage.
  • The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.
Note

AWS EFS only supports regional volumes, not zonal volumes.

5.4.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.4.3. Setting up the AWS EFS CSI Driver Operator

  1. If you are using AWS EFS with AWS Secure Token Service (STS), obtain a role Amazon Resource Name (ARN) for STS. This is required for installing the AWS EFS CSI Driver Operator.
  2. Install the AWS EFS CSI Driver Operator.
  3. Install the AWS EFS CSI Driver.

5.4.3.1. Obtaining a role Amazon Resource Name for Security Token Service

This procedure explains how to obtain a role Amazon Resource Name (ARN) to configure the AWS EFS CSI Driver Operator with OpenShift Dedicated on AWS Security Token Service (STS).

Important

Perform this procedure before you install the AWS EFS CSI Driver Operator (see Installing the AWS EFS CSI Driver Operator procedure).

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • AWS account credentials

Procedure

  1. Create an IAM policy JSON file with the following content:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "elasticfilesystem:DescribeAccessPoints",
            "elasticfilesystem:DescribeFileSystems",
            "elasticfilesystem:DescribeMountTargets",
            "ec2:DescribeAvailabilityZones",
            "elasticfilesystem:TagResource"
          ],
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "elasticfilesystem:CreateAccessPoint"
          ],
          "Resource": "*",
          "Condition": {
            "StringLike": {
              "aws:RequestTag/efs.csi.aws.com/cluster": "true"
            }
          }
        },
        {
          "Effect": "Allow",
          "Action": "elasticfilesystem:DeleteAccessPoint",
          "Resource": "*",
          "Condition": {
            "StringEquals": {
              "aws:ResourceTag/efs.csi.aws.com/cluster": "true"
            }
          }
        }
      ]
    }
  2. Create an IAM trust JSON file with the following content:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>"  1
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "<openshift_oidc_provider>:sub": [  2
                "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator",
                "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa"
              ]
            }
          }
        }
      ]
    }
    1
    Specify your AWS account ID and the OpenShift OIDC provider endpoint.

    Obtain your AWS account ID by running the following command:

    $ aws sts get-caller-identity --query Account --output text

    Obtain the OpenShift OIDC endpoint by running the following command:

    $ openshift_oidc_provider=`oc get authentication.config.openshift.io cluster \
      -o json | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///"`; \
      echo $openshift_oidc_provider
    2
    Specify the OpenShift OIDC endpoint again.
  3. Create the IAM role:

    ROLE_ARN=$(aws iam create-role \
      --role-name "<your_cluster_name>-aws-efs-csi-operator" \
      --assume-role-policy-document file://<your_trust_file_name>.json \
      --query "Role.Arn" --output text); echo $ROLE_ARN

    Copy the role ARN. You will need it when you install the AWS EFS CSI Driver Operator.

  4. Create the IAM policy:

    POLICY_ARN=$(aws iam create-policy \
      --policy-name "<your_cluster_name>-aws-efs-csi" \
      --policy-document file://<your_policy_file_name>.json \
      --query 'Policy.Arn' --output text); echo $POLICY_ARN
  5. Attach the IAM policy to the IAM role:

    $ aws iam attach-role-policy \
      --role-name "<your_cluster_name>-aws-efs-csi-operator" \
      --policy-arn $POLICY_ARN
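
    Optionally, you can confirm that the policy is now attached to the role with a command such as the following:

    $ aws iam list-attached-role-policies \
      --role-name "<your_cluster_name>-aws-efs-csi-operator"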

5.4.3.2. Installing the AWS EFS CSI Driver Operator

The AWS EFS CSI Driver Operator (a Red Hat Operator) is not installed in OpenShift Dedicated by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.

Prerequisites

  • Access to the OpenShift Dedicated web console.

Procedure

To install the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.
  2. Install the AWS EFS CSI Operator:

    1. Click Operators > OperatorHub.
    2. Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.
    3. Click the AWS EFS CSI Driver Operator button.
    Important

    Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.

    1. On the AWS EFS CSI Driver Operator page, click Install.
    2. On the Install Operator page, ensure that:

      • If you are using AWS EFS with AWS Secure Token Service (STS), in the role ARN field, enter the role ARN that you copied from the last step of the Obtaining a role Amazon Resource Name for Security Token Service procedure.
      • All namespaces on the cluster (default) is selected.
      • Installed Namespace is set to openshift-cluster-csi-drivers.
    3. Click Install.

      After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.

5.4.3.3. Installing the AWS EFS CSI Driver

After installing the AWS EFS CSI Driver Operator (a Red Hat operator), you install the AWS EFS CSI driver.

Prerequisites

  • Access to the OpenShift Dedicated web console.

Procedure

  1. Click Administration > CustomResourceDefinitions > ClusterCSIDriver.
  2. On the Instances tab, click Create ClusterCSIDriver.
  3. Use the following YAML file:

    apiVersion: operator.openshift.io/v1
    kind: ClusterCSIDriver
    metadata:
        name: efs.csi.aws.com
    spec:
      managementState: Managed
  4. Click Create.
  5. Wait for the following Conditions to change to a "True" status:

    • AWSEFSDriverNodeServiceControllerAvailable
    • AWSEFSDriverControllerServiceControllerAvailable

5.4.4. Creating the AWS EFS storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.

The AWS EFS CSI Driver Operator (a Red Hat operator), after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.

5.4.4.1. Creating the AWS EFS storage class using the console

Procedure

  1. In the OpenShift Dedicated console, click Storage > StorageClasses.
  2. On the StorageClasses page, click Create StorageClass.
  3. On the StorageClass page, perform the following steps:

    1. Enter a name to reference the storage class.
    2. Optional: Enter the description.
    3. Select the reclaim policy.
    4. Select efs.csi.aws.com from the Provisioner drop-down list.
    5. Optional: Set the configuration parameters for the selected provisioner.
  4. Click Create.

5.4.4.2. Creating the AWS EFS storage class using the CLI

Procedure

  • Create a StorageClass object:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap 1
      fileSystemId: fs-a5324911 2
      directoryPerms: "700" 3
      gidRangeStart: "1000" 4
      gidRangeEnd: "2000" 5
      basePath: "/dynamic_provisioning" 6
    1
    provisioningMode must be efs-ap to enable dynamic provisioning.
    2
    fileSystemId must be the ID of the EFS volume created manually.
    3
    directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
    4 5
    gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
    6
    basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as “/dynamic_provisioning/<random uuid>” on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
    Note

    A cluster admin can create several StorageClass objects, each using a different EFS volume.
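
To create the storage class from this YAML, save it to a file and apply it with oc. The file name below is only an example:

    $ oc create -f efs-sc.yaml

    $ oc get storageclass efs-sc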

5.4.5. Creating and configuring access to EFS volumes in AWS

This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OpenShift Dedicated.

Prerequisites

  • AWS account credentials

Procedure

To create and configure access to an EFS volume in AWS:

  1. On the AWS console, open https://console.aws.amazon.com/efs.
  2. Click Create file system:

    • Enter a name for the file system.
    • For Virtual Private Cloud (VPC), select your OpenShift Dedicated cluster's virtual private cloud (VPC).
    • Accept default settings for all other selections.
  3. Wait for the volume and mount targets to finish being fully created:

    1. Go to https://console.aws.amazon.com/efs#/file-systems.
    2. Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).
  4. On the Network tab, copy the Security Group ID (you will need this in the next step).
  5. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.
  6. On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OpenShift Dedicated nodes to access EFS volumes:

    • Type: NFS
    • Protocol: TCP
    • Port range: 2049
    • Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)

      This step allows OpenShift Dedicated to use NFS ports from the cluster.

  7. Save the rule.
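
If you prefer the AWS CLI over the console, the following sketch outlines equivalent steps. All IDs, names, and the CIDR are placeholders for illustration, and you still need one mount target per subnet used by your nodes:

    # Create the EFS file system.
    $ aws efs create-file-system \
        --tags Key=Name,Value=<file_system_name> \
        --region <aws_region>

    # Create a mount target in each subnet used by the cluster nodes.
    $ aws efs create-mount-target \
        --file-system-id <file_system_id> \
        --subnet-id <subnet_id> \
        --security-groups <security_group_id>

    # Allow inbound NFS (TCP 2049) from the node IP range to the security group used by the mount targets.
    $ aws ec2 authorize-security-group-ingress \
        --group-id <security_group_id> \
        --protocol tcp \
        --port 2049 \
        --cidr 10.0.0.0/16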

5.4.6. Dynamic provisioning for Amazon Elastic File Storage

The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass/EFS volume.

Important

Note that PVC.spec.resources is not enforced by EFS.

In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (for example, petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume.

Monitoring EFS volume sizes in AWS is strongly recommended.

Prerequisites

  • You have created Amazon Elastic File Storage (Amazon EFS) volumes.
  • You have created the AWS EFS storage class.

Procedure

To enable dynamic provisioning:

  • Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test
    spec:
      storageClassName: efs-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi

If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.
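
Once the PVC is bound, a pod can mount it like any other claim. The following is a minimal sketch; the pod name, image, and mount path are arbitrary illustrations:

    apiVersion: v1
    kind: Pod
    metadata:
      name: efs-app                       # hypothetical name for illustration
    spec:
      containers:
        - name: app
          image: registry.access.redhat.com/ubi9/ubi-minimal   # any image works; this one is illustrative
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: test               # the PVC created in the previous example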

5.4.7. Creating static PVs with Amazon Elastic File Storage

It is possible to use an Amazon Elastic File Storage (Amazon EFS) volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.

Prerequisites

  • You have created Amazon EFS volumes.

Procedure

  • Create the PV using the following YAML file:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity: 1
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-ae66151a 2
        volumeAttributes:
          encryptInTransit: "false" 3
    1
    spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data on the volume.
    2
    volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
    3
    If desired, you can disable encryption in transit. Encryption is enabled by default.

If you have problems setting up static PVs, see AWS EFS troubleshooting.
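
To consume the static PV, you typically create a PVC that binds to it. The following is a minimal sketch; the empty storageClassName prevents dynamic provisioning and volumeName pins the claim to the PV defined above:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim              # hypothetical name for illustration
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""         # empty string disables dynamic provisioning
      volumeName: efs-pv           # bind explicitly to the static PV created above
      resources:
        requests:
          storage: 5Gi             # must not exceed the PV capacity; not otherwise enforced by EFS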

5.4.8. Amazon Elastic File Storage security

The following information is important for Amazon Elastic File Storage (Amazon EFS) security.

When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.

As a consequence, EFS volumes silently ignore FSGroup; OpenShift Dedicated is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.

Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.

5.4.9. AWS EFS storage CSI usage metrics

5.4.9.1. Usage metrics overview

Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics allow you to monitor how much space is used by either dynamically or statically provisioned EFS volumes.

Important

This feature is disabled by default because turning on metrics can lead to performance degradation.

The AWS EFS usage metrics feature collects volume metrics in the AWS EFS CSI Driver by recursively walking through the files in the volume. Because this effort can degrade performance, administrators must explicitly enable this feature.

5.4.9.2. Enabling usage metrics using the web console

To enable Amazon Web Services (AWS) Elastic File Service (EFS) Storage Container Storage Interface (CSI) usage metrics using the web console:

  1. Click Administration > CustomResourceDefinitions.
  2. On the CustomResourceDefinitions page, type clustercsidriver in the Name filter box.
  3. Click the ClusterCSIDriver CRD.
  4. Click the YAML tab.
  5. Under spec.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.

    RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.

    Example ClusterCSIDriver efs.csi.aws.com YAML file

    spec:
        driverConfig:
            driverType: AWS
            aws:
                efsVolumeMetrics:
                  state: RecursiveWalk
                  recursiveWalk:
                    refreshPeriodMinutes: 100
                    fsRateLimit: 10

  6. Optional: To define how the recursive walk operates, you can also set the following fields:

    • refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.
    • fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.
  7. Click Save.
Note

To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.

5.4.9.3. Enabling usage metrics using the CLI

To enable Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics using the CLI:

  1. Edit ClusterCSIDriver by running the following command:

    $ oc edit clustercsidriver efs.csi.aws.com
  2. Under spec.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.

    RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.

    Example ClusterCSIDriver efs.csi.aws.com YAML file

    spec:
        driverConfig:
            driverType: AWS
            aws:
                efsVolumeMetrics:
                  state: RecursiveWalk
                  recursiveWalk:
                    refreshPeriodMinutes: 100
                    fsRateLimit: 10

  3. Optional: To define how the recursive walk operates, you can also set the following fields:

    • refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.
    • fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.
  4. Save the changes to the efs.csi.aws.com object.
Note

To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.
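
If you prefer a single command over interactive editing, a merge patch such as the following should produce the same result. This is a sketch; review the resulting object afterwards with oc get clustercsidriver efs.csi.aws.com -o yaml:

    $ oc patch clustercsidriver efs.csi.aws.com --type=merge \
      -p '{"spec":{"driverConfig":{"driverType":"AWS","aws":{"efsVolumeMetrics":{"state":"RecursiveWalk"}}}}}'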

5.4.10. Amazon Elastic File Storage troubleshooting

The following information provides guidance on how to troubleshoot issues with Amazon Elastic File Storage (Amazon EFS):

  • The AWS EFS Operator and CSI driver run in namespace openshift-cluster-csi-drivers.
  • To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:

    $ oc adm must-gather
    [must-gather      ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
    [must-gather      ] OUT namespace/openshift-must-gather-xm4wq created
    [must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
    [must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
  • To show AWS EFS Operator errors, view the ClusterCSIDriver status:

    $ oc get clustercsidriver efs.csi.aws.com -o yaml
  • If a volume cannot be mounted to a pod (as shown in the output of the following command):

    $ oc describe pod
    ...
      Type     Reason       Age    From               Message
      ----     ------       ----   ----               -------
      Normal   Scheduled    2m13s  default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
      Warning  FailedMount  13s    kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1
      Warning  FailedMount  10s    kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
    1
    Warning message indicating volume not mounted.

    This error is frequently caused by AWS dropping packets between an OpenShift Dedicated node and Amazon EFS.

    Check that the following are correct:

    • AWS firewall and Security Groups
    • Networking: port number and IP addresses
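
To inspect the driver components directly, list the pods in the namespace and view the logs of a specific pod, for example:

    $ oc -n openshift-cluster-csi-drivers get pods

    $ oc -n openshift-cluster-csi-drivers logs <pod_name> --all-containers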

5.4.11. Uninstalling the AWS EFS CSI Driver Operator

All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator (a Red Hat operator).

Prerequisites

  • Access to the OpenShift Dedicated web console.

Procedure

To uninstall the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.
  2. Stop all applications that use AWS EFS PVs.
  3. Delete all AWS EFS PVs:

    1. Click Storage > PersistentVolumeClaims.
    2. Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.
  4. Uninstall the AWS EFS CSI driver:

    Note

    You must remove the CSI driver before you can uninstall the Operator.

    1. Click Administration > CustomResourceDefinitions > ClusterCSIDriver.
    2. On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.
    3. When prompted, click Delete.
  5. Uninstall the AWS EFS CSI Operator:

    1. Click Operators > Installed Operators.
    2. On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.
    3. On the upper right of the Installed Operators > Operator details page, click Actions > Uninstall Operator.
    4. When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.

      After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.

Note

Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. An OpenShift Dedicated cluster cannot be destroyed when there is an EFS volume that uses the cluster’s VPC. Amazon does not allow deletion of such a VPC.

5.4.12. Additional resources

5.5. GCP PD CSI Driver Operator

5.5.1. Overview

OpenShift Dedicated can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Dedicated installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class). You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.
  • GCP PD driver: The driver enables you to create and mount GCP PD PVs.

5.5.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.5.3. GCP PD CSI driver storage class parameters

The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation.

The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Dedicated.

Table 5.2. CreateVolume Parameters
type
    Values: pd-ssd, pd-standard, or pd-balanced
    Default: pd-standard
    Description: Allows you to choose between standard PVs or solid-state-drive PVs. The driver does not validate the value, so all possible values are accepted.

replication-type
    Values: none or regional-pd
    Default: none
    Description: Allows you to choose between zonal or regional PVs.

disk-encryption-kms-key
    Values: Fully qualified resource identifier for the key to use to encrypt new disks.
    Default: Empty string
    Description: Uses customer-managed encryption keys (CMEK) to encrypt new disks.
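
For example, a storage class that provisions regional SSD-backed PVs might combine these parameters as follows. This is a sketch, and the class name is arbitrary:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-gce-pd-ssd-regional     # hypothetical name for illustration
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-ssd
      replication-type: regional-pd
    volumeBindingMode: WaitForFirstConsumer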

5.5.4. Creating a custom-encrypted persistent volume

When you create a PersistentVolumeClaim object, OpenShift Dedicated provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV.

For encryption, the newly created PV uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key.

Prerequisites

  • You are logged in to a running OpenShift Dedicated cluster.
  • You have created a Cloud KMS key ring and key version.

For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).

Procedure

To create a custom-encrypted PV, complete the following steps:

  1. Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-gce-pd-cmek
    provisioner: pd.csi.storage.gke.io
    volumeBindingMode: "WaitForFirstConsumer"
    allowVolumeExpansion: true
    parameters:
      type: pd-standard
      disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1
    1
    This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource’s ID and Getting a Cloud KMS resource ID.
    Note

    You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io.

  2. Deploy the storage class on your OpenShift Dedicated cluster by saving the preceding YAML to a file and running oc apply -f <file_name>, and then verify it by using the oc describe command:

    $ oc describe storageclass csi-gce-pd-cmek

    Example output

    Name:                  csi-gce-pd-cmek
    IsDefaultClass:        No
    Annotations:           None
    Provisioner:           pd.csi.storage.gke.io
    Parameters:            disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
    AllowVolumeExpansion:  true
    MountOptions:          none
    ReclaimPolicy:         Delete
    VolumeBindingMode:     WaitForFirstConsumer
    Events:                none

  3. Create a file named pvc.yaml that references the storage class object that you created in the previous step:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: csi-gce-pd-cmek
      resources:
        requests:
          storage: 6Gi
    Note

    If you marked the new storage class as default, you can omit the storageClassName field.

  4. Apply the PVC on your cluster:

    $ oc apply -f pvc.yaml
  5. Get the status of your PVC and verify that it is created and bound to a newly provisioned PV:

    $ oc get pvc

    Example output

    NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    podpvc    Bound     pvc-e36abf50-84f3-11e8-8538-42010a800002   10Gi       RWO            csi-gce-pd-cmek  9s

    Note

    If your storage class has the volumeBindingMode field set to WaitForFirstConsumer, you must create a pod to use the PVC before you can verify it.

Your CMEK-protected PV is now ready to use with your OpenShift Dedicated cluster.

5.5.5. Additional resources

5.6. Google Compute Platform Filestore CSI Driver Operator

5.6.1. Overview

OpenShift Dedicated is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Compute Platform (GCP) Filestore Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the openshift-cluster-csi-drivers namespace.

  • The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed. The GCP Filestore CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage.
  • The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs.

5.6.2. About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

5.6.3. Installing the GCP Filestore CSI Driver Operator

The Google Compute Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Dedicated by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster.

Prerequisites

  • Access to the OpenShift Dedicated web console.

Procedure

To install the GCP Filestore CSI Driver Operator from the web console:

  1. Log in to the OpenShift Cluster Manager.
  2. Select your cluster.
  3. Click Open console and log in with your credentials.
  4. Enable the Filestore API in the GCE project by running the following command:

    $ gcloud services enable file.googleapis.com  --project <my_gce_project> 1
    1
    Replace <my_gce_project> with your Google Cloud project.

    You can also do this by using the Google Cloud web console.

  5. Install the GCP Filestore CSI Operator:

    1. Click Operators > OperatorHub.
    2. Locate the GCP Filestore CSI Operator by typing GCP Filestore in the filter box.
    3. Click the GCP Filestore CSI Driver Operator button.
    4. On the GCP Filestore CSI Driver Operator page, click Install.
    5. On the Install Operator page, ensure that:

      • All namespaces on the cluster (default) is selected.
      • Installed Namespace is set to openshift-cluster-csi-drivers.
    6. Click Install.

      After the installation finishes, the GCP Filestore CSI Operator is listed in the Installed Operators section of the web console.

  6. Install the GCP Filestore CSI Driver:

    1. Click Administration > CustomResourceDefinitions > ClusterCSIDriver.
    2. On the Instances tab, click Create ClusterCSIDriver.

      Use the following YAML file:

      apiVersion: operator.openshift.io/v1
      kind: ClusterCSIDriver
      metadata:
          name: filestore.csi.storage.gke.io
      spec:
        managementState: Managed
    3. Click Create.
    4. Wait for the following Conditions to change to a "true" status:

      • GCPFilestoreDriverCredentialsRequestControllerAvailable
      • GCPFilestoreDriverNodeServiceControllerAvailable
      • GCPFilestoreDriverControllerServiceControllerAvailable

5.6.4. Creating a storage class for GCP Filestore Storage

After installing the Operator, you should create a storage class for dynamic provisioning of Google Compute Platform (GCP) Filestore volumes.

Prerequisites

  • You are logged in to the running OpenShift Dedicated cluster.

Procedure

To create a storage class:

  1. Create a storage class using the following example YAML file:

    Example YAML file

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: filestore-csi
    provisioner: filestore.csi.storage.gke.io
    parameters:
      connect-mode: DIRECT_PEERING 1
      network: network-name 2
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer

    1
    For a shared VPC, use the connect-mode parameter set to PRIVATE_SERVICE_ACCESS. For a non-shared VPC, the value is DIRECT_PEERING, which is the default setting.
    2
    Specify the name of the GCP virtual private cloud (VPC) network in which Filestore instances should be created.
  2. Specify the name of the VPC network in which Filestore instances should be created.

    It is recommended to specify the VPC network in which the Filestore instances should be created. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project.

    On IPI installations, the VPC network name is typically the cluster name with the suffix "-network". However, on UPI installations, the VPC network name can be any value chosen by the user.

    For a shared VPC (connect-mode = PRIVATE_SERVICE_ACCESS), the network needs to be the full VPC name. For example: projects/shared-vpc-name/global/networks/gcp-filestore-network.

    You can find out the VPC network name by inspecting the MachineSets objects with the following command:

    $ oc -n openshift-machine-api get machinesets -o yaml | grep "network:"
                - network: gcp-filestore-network
    (...)

    In this example, the VPC network name in this cluster is "gcp-filestore-network".
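
    After the storage class exists, a PVC can request a Filestore-backed volume from it. The following is a minimal sketch; note that Filestore instances have a large minimum size, so the back end may round small requests up:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: filestore-claim          # hypothetical name for illustration
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: filestore-csi
      resources:
        requests:
          storage: 1Ti               # Filestore enforces a large minimum instance size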

5.6.5. Destroying clusters and GCP Filestore

Typically, if you destroy a cluster, the OpenShift Dedicated installer deletes all of the cloud resources that belong to that cluster. However, due to the special nature of the Google Compute Platform (GCP) Filestore resources, the automated cleanup process might not remove all of them in some rare cases.

Therefore, Red Hat recommends that you verify that all cluster-owned Filestore resources are deleted by the uninstall process.

Procedure

To ensure that all GCP Filestore PVCs have been deleted:

  1. Access your Google Cloud account using the GUI or CLI.
  2. Search for any resources with the kubernetes-io-cluster-${CLUSTER_ID}=owned label.

    Since the cluster ID is unique to the deleted cluster, there should not be any remaining resources with that cluster ID.

  3. In the unlikely case there are some remaining resources, delete them.
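
One way to search from the CLI is to list Filestore instances and filter on the cluster label. The filter expression below is an assumption about the label format described above; adjust it to your environment:

    $ gcloud filestore instances list \
        --filter="labels.kubernetes-io-cluster-<cluster_id>=owned"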

5.6.6. Additional resources
