Chapter 5. Using Container Storage Interface (CSI)
5.1. Configuring CSI volumes
The Container Storage Interface (CSI) allows Red Hat OpenShift Service on AWS to consume storage from storage back ends that implement the CSI interface as persistent storage.
Red Hat OpenShift Service on AWS 4 supports version 1.6.0 of the CSI specification.
5.1.1. CSI architecture
CSI drivers are typically shipped as container images. These containers are not aware of the Red Hat OpenShift Service on AWS cluster where they run. To use a CSI-compatible storage back end in Red Hat OpenShift Service on AWS, the cluster administrator must deploy several components that serve as a bridge between Red Hat OpenShift Service on AWS and the storage driver.
The following diagram provides a high-level overview of the components running in pods in the Red Hat OpenShift Service on AWS cluster.
It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar.
5.1.1.1. External CSI controllers
External CSI controllers is a deployment that deploys one or more pods with five containers:
- The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects.
- The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on the PersistentVolumeClaim object.
- An external CSI attacher container translates attach and detach calls from Red Hat OpenShift Service on AWS to respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
- An external CSI provisioner container translates provision and delete calls from Red Hat OpenShift Service on AWS to respective CreateVolume and DeleteVolume calls to the CSI driver.
- A CSI driver container.
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
The attach, detach, provision, and delete operations typically require the CSI driver to use credentials to the storage back end. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node.
The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary Red Hat OpenShift Service on AWS attachment API.
5.1.1.2. CSI driver daemon set
The CSI driver daemon set runs a pod on every node that allows Red Hat OpenShift Service on AWS to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:
- A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
- A CSI driver.
The CSI driver deployed on the node should have as few credentials to the storage back end as possible. Red Hat OpenShift Service on AWS will only use the node plugin set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.
5.1.2. CSI drivers supported by Red Hat OpenShift Service on AWS
Red Hat OpenShift Service on AWS installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.
To create CSI-provisioned persistent volumes that mount to these supported storage assets, Red Hat OpenShift Service on AWS installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
The following table describes the CSI drivers that are installed with Red Hat OpenShift Service on AWS and which CSI features they support, such as volume snapshots and resize.
In addition to the drivers listed in the following table, ROSA functions with CSI drivers from third-party storage vendors. Red Hat does not oversee third-party provisioners or the connected CSI drivers and the vendors fully control source code, deployment, operation, and Kubernetes compatibility. These volume provisioners are considered customer-managed and the respective vendors are responsible for providing support. See the Shared responsibilities for Red Hat OpenShift Service on AWS matrix for more information.
CSI driver | CSI volume snapshots | CSI cloning | CSI resize | Inline ephemeral volumes
---|---|---|---|---
AWS EBS | ✅ | | ✅ |
AWS EFS | | | |
LVM Storage | ✅ | ✅ | ✅ |
5.1.3. Dynamic provisioning
Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in Red Hat OpenShift Service on AWS and the parameters available for configuration.
The created storage class can be configured to enable dynamic provisioning.
Procedure
Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.
# oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class> 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <provisioner-name> 2
parameters:
EOF

1: The name of the storage class to create.
2: The name of the installed CSI driver (provisioner).
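You can confirm the result by listing the storage classes; the new class should be marked as the default. A quick check, assuming the oc CLI is logged in to the cluster:

$ oc get storageclass

The new storage class appears with (default) next to its name, as shown in the example output in "Changing the default storage class" later in this chapter.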
5.1.4. Example using the CSI driver
The following example installs a default MySQL template without any changes to the template.
Prerequisites
- The CSI driver has been deployed.
- A storage class has been created for dynamic provisioning.
Procedure
Create the MySQL template:
# oc new-app mysql-persistent
Example output
--> Deploying template "openshift/mysql-persistent" to project default ...
# oc get pvc
Example output
NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql     Bound     kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
5.2. Managing the default storage class
5.2.1. Overview
Managing the default storage class allows you to accomplish several different objectives:
- Enforcing static provisioning by disabling dynamic provisioning.
- When you have other preferred storage classes, preventing the storage operator from re-creating the initial default storage class.
- Renaming, or otherwise changing, the default storage class.
To accomplish these objectives, you change the setting for the spec.storageClassState field in the ClusterCSIDriver object. The possible settings for this field are:
- Managed: (Default) The Container Storage Interface (CSI) operator is actively managing its default storage class, so that most manual changes made by a cluster administrator to the default storage class are removed, and the default storage class is continuously re-created if you attempt to manually delete it.
- Unmanaged: You can modify the default storage class. The CSI operator is not actively managing storage classes, and does not reconcile the default storage class that it creates automatically.
- Removed: The CSI operator deletes the default storage class.
5.2.2. Managing the default storage class using the web console
Prerequisites
- Access to the Red Hat OpenShift Service on AWS web console.
- Access to the cluster with cluster-admin privileges.
Procedure
To manage the default storage class using the web console:
- Log in to the web console.
- Click Administration > CustomResourceDefinitions.
- On the CustomResourceDefinitions page, type clustercsidriver to find the ClusterCSIDriver object.
- Click ClusterCSIDriver, and then click the Instances tab.
- Click the name of the desired instance, and then click the YAML tab.
Add the spec.storageClassState field with a value of Managed, Unmanaged, or Removed.
Example

...
spec:
  driverConfig:
    driverType: ''
  logLevel: Normal
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  storageClassState: Unmanaged 1
...
1: spec.storageClassState field set to "Unmanaged".
- Click Save.
5.2.3. Managing the default storage class using the CLI
Prerequisites
- Access to the cluster with cluster-admin privileges.
Procedure
To manage the storage class using the CLI, run the following command:
oc patch clustercsidriver $DRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"${STATE}\"}}" 1
1: Where ${STATE} is "Removed", "Managed", or "Unmanaged", and where $DRIVERNAME is the provisioner name. You can find the provisioner name by running the command oc get sc.
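For example, the following sketch stops the AWS EBS CSI Driver Operator from reconciling its default storage class; it assumes the ClusterCSIDriver object is named ebs.csi.aws.com, which you can confirm with oc get clustercsidriver:

$ oc patch clustercsidriver ebs.csi.aws.com --type=merge -p '{"spec":{"storageClassState":"Unmanaged"}}'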
5.2.4. Absent or multiple default storage classes
5.2.4.1. Multiple default storage classes
Multiple default storage classes can occur if you mark a non-default storage class as default and do not unset the existing default storage class, or if you create a default storage class when a default storage class is already present. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class (pvc.spec.storageClassName=nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives a MultipleDefaultStorageClasses alert in the alerts dashboard indicating that there are multiple default storage classes.
5.2.4.2. Absent default storage class
There are two possible scenarios where PVCs can attempt to use a non-existent default storage class:
- An administrator removes the default storage class or marks it as non-default, and then a user creates a PVC requesting the default storage class.
- During installation, the installer creates a PVC requesting the default storage class, which has not yet been created.
In the preceding scenarios, PVCs remain in the pending state indefinitely. To resolve this situation, create a default storage class or declare one of the existing storage classes as the default. As soon as the default storage class is created or declared, the PVCs get the new default storage class. If possible, the PVCs eventually bind to statically or dynamically provisioned PVs as usual, and move out of the pending state.
5.2.5. Changing the default storage class
Use the following procedure to change the default storage class.
For example, suppose you have two defined storage classes, gp3 and standard, and you want to change the default storage class from gp3 to standard.
Prerequisites
- Access to the cluster with cluster-admin privileges.
Procedure
To change the default storage class:
List the storage classes:
$ oc get storageclass
Example output
NAME                 TYPE
gp3 (default)        kubernetes.io/aws-ebs 1
standard             kubernetes.io/aws-ebs

1: (default) indicates the default storage class.
Make the desired storage class the default.
For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Note: You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class (pvc.spec.storageClassName=nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives a MultipleDefaultStorageClasses alert in the alerts dashboard indicating that there are multiple default storage classes.
Remove the default storage class setting from the old default storage class.
For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command:

$ oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Verify the changes:
$ oc get storageclass
Example output
NAME                 TYPE
gp3                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/aws-ebs
5.3. AWS Elastic Block Store CSI Driver Operator
5.3.1. Overview
Red Hat OpenShift Service on AWS is capable of provisioning persistent volumes (PVs) using the AWS EBS CSI driver.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to AWS EBS storage assets, Red Hat OpenShift Service on AWS installs the AWS EBS CSI Driver Operator (a Red Hat operator) and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace.
- The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class). You also have the option to create the AWS EBS StorageClass as described in Persistent storage using Amazon Elastic Block Store.
- The AWS EBS CSI driver enables you to create and mount AWS EBS PVs.
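Because the Operator provides a default storage class, a persistent volume claim that omits storageClassName is dynamically provisioned on AWS EBS. The following is a minimal sketch; the claim name and requested size are arbitrary examples, and the name of the default storage class can vary by cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # EBS volumes attach to a single node at a time
  resources:
    requests:
      storage: 10Gi          # arbitrary example size
  # storageClassName is omitted, so the cluster's default storage class is used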
5.3.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give Red Hat OpenShift Service on AWS users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
Red Hat OpenShift Service on AWS defaults to using the CSI plugin to provision Amazon Elastic Block Store (Amazon EBS) storage.
For information about dynamically provisioning AWS EBS persistent volumes in Red Hat OpenShift Service on AWS, see Persistent storage using Amazon Elastic Block Store.
Additional resources
5.4. AWS Elastic File Service CSI Driver Operator
This procedure is specific to the AWS EFS CSI Driver Operator (a Red Hat Operator), which is only applicable for Red Hat OpenShift Service on AWS 4.10 and later versions.
5.4.1. Overview
Red Hat OpenShift Service on AWS is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, Red Hat OpenShift Service on AWS installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
- The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage.
- The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.
AWS EFS only supports regional volumes, not zonal volumes.
5.4.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give Red Hat OpenShift Service on AWS users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
5.4.3. Setting up the AWS EFS CSI Driver Operator
- If you are using AWS EFS with AWS Secure Token Service (STS), obtain a role Amazon Resource Name (ARN) for STS. This is required for installing the AWS EFS CSI Driver Operator.
- Install the AWS EFS CSI Driver Operator.
- Install the AWS EFS CSI Driver.
5.4.3.1. Obtaining a role Amazon Resource Name for Security Token Service
This procedure explains how to obtain a role Amazon Resource Name (ARN) to configure the AWS EFS CSI Driver Operator with Red Hat OpenShift Service on AWS on AWS Security Token Service (STS).
Perform this procedure before you install the AWS EFS CSI Driver Operator (see Installing the AWS EFS CSI Driver Operator procedure).
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- AWS account credentials
Procedure
Create an IAM policy JSON file with the following content:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticfilesystem:DescribeAccessPoints", "elasticfilesystem:DescribeFileSystems", "elasticfilesystem:DescribeMountTargets", "ec2:DescribeAvailabilityZones", "elasticfilesystem:TagResource" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "elasticfilesystem:CreateAccessPoint" ], "Resource": "*", "Condition": { "StringLike": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" } } }, { "Effect": "Allow", "Action": "elasticfilesystem:DeleteAccessPoint", "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/efs.csi.aws.com/cluster": "true" } } } ] }
Create an IAM trust JSON file with the following content:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<openshift_oidc_provider>:sub": [ 2 "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator", "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa" ] } } } ] }
1: Specify your AWS account ID and the OpenShift OIDC provider endpoint.
Obtain your AWS account ID by running the following command:
$ aws sts get-caller-identity --query Account --output text
Obtain the OpenShift OIDC endpoint by running the following command:
$ rosa describe cluster \
  -c $(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \
  -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4
2: Specify the OpenShift OIDC endpoint again.
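You can fill in the placeholders in the trust file manually, or script it. The following is an illustrative shell sketch, not part of the documented procedure; the variable names and the in-place sed edit are assumptions:

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_PROVIDER=$(rosa describe cluster \
  -c $(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \
  -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4)
# Substitute the placeholders in the trust policy file with the real values
sed -i "s|<your_aws_account_ID>|${ACCOUNT_ID}|g; s|<openshift_oidc_provider>|${OIDC_PROVIDER}|g" <your_trust_file_name>.json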
Create the IAM role:
ROLE_ARN=$(aws iam create-role \
  --role-name "<your_cluster_name>-aws-efs-csi-operator" \
  --assume-role-policy-document file://<your_trust_file_name>.json \
  --query "Role.Arn" --output text); echo $ROLE_ARN
Copy the role ARN. You will need it when you install the AWS EFS CSI Driver Operator.
Create the IAM policy:
POLICY_ARN=$(aws iam create-policy \
  --policy-name "<your_cluster_name>-aws-efs-csi" \
  --policy-document file://<your_policy_file_name>.json \
  --query 'Policy.Arn' --output text); echo $POLICY_ARN
Attach the IAM policy to the IAM role:
$ aws iam attach-role-policy \
  --role-name "<your_cluster_name>-aws-efs-csi-operator" \
  --policy-arn $POLICY_ARN
Next steps
Additional resources
5.4.3.2. Installing the AWS EFS CSI Driver Operator
The AWS EFS CSI Driver Operator (a Red Hat Operator) is not installed in Red Hat OpenShift Service on AWS by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.
Prerequisites
- Access to the Red Hat OpenShift Service on AWS web console.
Procedure
To install the AWS EFS CSI Driver Operator from the web console:
- Log in to the web console.
Install the AWS EFS CSI Operator:
- Click Operators > OperatorHub.
- Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.
- Click the AWS EFS CSI Driver Operator button.
Important: Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.
- On the AWS EFS CSI Driver Operator page, click Install.
On the Install Operator page, ensure that:
- If you are using AWS EFS with AWS Secure Token Service (STS), in the role ARN field, enter the ARN role copied from the last step of the Obtaining a role Amazon Resource Name for Security Token Service procedure.
- All namespaces on the cluster (default) is selected.
- Installed Namespace is set to openshift-cluster-csi-drivers.
Click Install.
After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.
Next steps
5.4.3.3. Installing the AWS EFS CSI Driver
After installing the AWS EFS CSI Driver Operator (a Red Hat operator), you install the AWS EFS CSI driver.
Prerequisites
- Access to the Red Hat OpenShift Service on AWS web console.
Procedure
- Click Administration > CustomResourceDefinitions > ClusterCSIDriver.
- On the Instances tab, click Create ClusterCSIDriver.
Use the following YAML file:
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed
- Click Create.
Wait for the following Conditions to change to a "True" status:
- AWSEFSDriverNodeServiceControllerAvailable
- AWSEFSDriverControllerServiceControllerAvailable
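You can also check these conditions from the CLI. A sketch using the same command that appears later in the troubleshooting section; inspect the status.conditions list in the output:

$ oc get clustercsidriver efs.csi.aws.com -o yaml

Both conditions listed above should report status: "True" before you proceed.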
5.4.4. Creating the AWS EFS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
The AWS EFS CSI Driver Operator (a Red Hat operator), after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.
5.4.4.1. Creating the AWS EFS storage class using the console
Procedure
- In the Red Hat OpenShift Service on AWS console, click Storage > StorageClasses.
- On the StorageClasses page, click Create StorageClass.
On the StorageClass page, perform the following steps:
- Enter a name to reference the storage class.
- Optional: Enter the description.
- Select the reclaim policy.
- Select efs.csi.aws.com from the Provisioner drop-down list.
- Optional: Set the configuration parameters for the selected provisioner.
- Click Create.
5.4.4.2. Creating the AWS EFS storage class using the CLI
Procedure
Create a StorageClass object:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap 1
  fileSystemId: fs-a5324911 2
  directoryPerms: "700" 3
  gidRangeStart: "1000" 4
  gidRangeEnd: "2000" 5
  basePath: "/dynamic_provisioning" 6
1: provisioningMode must be efs-ap to enable dynamic provisioning.
2: fileSystemId must be the ID of the EFS volume created manually.
3: directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
4 5: gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
6: basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
Note: A cluster admin can create several StorageClass objects, each using a different EFS volume.
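A brief usage sketch, assuming the YAML above is saved to a file named efs-sc.yaml (the file name is arbitrary):

$ oc create -f efs-sc.yaml
$ oc get storageclass efs-sc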
5.4.5. Creating and configuring access to EFS volumes in AWS
This procedure explains how to create and configure EFS volumes in AWS so that you can use them in Red Hat OpenShift Service on AWS.
Prerequisites
- AWS account credentials
Procedure
To create and configure access to an EFS volume in AWS:
- On the AWS console, open https://console.aws.amazon.com/efs.
Click Create file system:
- Enter a name for the file system.
- For Virtual Private Cloud (VPC), select your Red Hat OpenShift Service on AWS cluster's virtual private cloud (VPC).
- Accept default settings for all other selections.
Wait for the volume and mount targets to finish being fully created:
- Go to https://console.aws.amazon.com/efs#/file-systems.
- Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).
- On the Network tab, copy the Security Group ID (you will need this in the next step).
- Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.
On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow Red Hat OpenShift Service on AWS nodes to access EFS volumes:
- Type: NFS
- Protocol: TCP
- Port range: 2049
Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)
This step allows Red Hat OpenShift Service on AWS to use NFS ports from the cluster.
- Save the rule.
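The same setup can be scripted with the AWS CLI instead of the console. The following is a rough sketch, not part of the documented procedure; the file system name, subnet ID, security group ID, and CIDR are placeholders that you must replace with your own values:

# Create the EFS file system
FS_ID=$(aws efs create-file-system \
  --tags Key=Name,Value=<file_system_name> \
  --query 'FileSystemId' --output text)

# Create a mount target in each subnet used by your cluster nodes
aws efs create-mount-target --file-system-id $FS_ID \
  --subnet-id <subnet_id> --security-groups <security_group_id>

# Allow NFS (TCP port 2049) from the node IP address range
aws ec2 authorize-security-group-ingress --group-id <security_group_id> \
  --protocol tcp --port 2049 --cidr 10.0.0.0/16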
5.4.6. Dynamic provisioning for Amazon Elastic File Storage
The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass/EFS volume.
Note that PVC.spec.resources is not enforced by EFS.
In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (for example, petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume.
Monitoring EFS volume sizes in AWS is strongly recommended.
Prerequisites
- You have created Amazon Elastic File Storage (Amazon EFS) volumes.
- You have created the AWS EFS storage class.
Procedure
To enable dynamic provisioning:
Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
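After the claim is bound, pods reference it like any other PVC. A minimal example pod sketch; the pod name, image, and mount path are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: efs-app                                     # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi    # example image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data                          # example mount path
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: test                             # the PVC created above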
If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.
Additional resources
5.4.7. Creating static PVs with Amazon Elastic File Storage
It is possible to use an Amazon Elastic File Storage (Amazon EFS) volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.
Prerequisites
- You have created Amazon EFS volumes.
Procedure
Create the PV using the following YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity: 1
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-ae66151a 2
    volumeAttributes:
      encryptInTransit: "false" 3
1: spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
2: volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
3: If desired, you can disable encryption in transit. Encryption is enabled by default.
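To use the static PV, create a PVC that binds to it. A minimal sketch, assuming the efs-pv volume defined above; setting storageClassName to an empty string prevents dynamic provisioning from intercepting the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-static-claim    # hypothetical claim name
spec:
  storageClassName: ""      # bind to a pre-created PV instead of provisioning a new one
  volumeName: efs-pv        # the PV created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi          # required by the API but, as noted above, not enforced by EFS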
If you have problems setting up static PVs, see AWS EFS troubleshooting.
5.4.8. Amazon Elastic File Storage security
The following information is important for Amazon Elastic File Storage (Amazon EFS) security.
When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.
As a consequence, EFS volumes silently ignore FSGroup; Red Hat OpenShift Service on AWS is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.
Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.
5.4.9. AWS EFS storage CSI usage metrics
5.4.9.1. Usage metrics overview
Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics allow you to monitor how much space is used by either dynamically or statically provisioned EFS volumes.
This feature is disabled by default because turning on metrics can lead to performance degradation.
The AWS EFS usage metrics feature collects volume metrics in the AWS EFS CSI Driver by recursively walking through the files in the volume. Because this effort can degrade performance, administrators must explicitly enable this feature.
5.4.9.2. Enabling usage metrics using the web console
To enable Amazon Web Services (AWS) Elastic File Service (EFS) Storage Container Storage Interface (CSI) usage metrics using the web console:
- Click Administration > CustomResourceDefinitions.
- On the CustomResourceDefinitions page, next to the Name dropdown box, type clustercsidriver.
- Click CRD ClusterCSIDriver.
- Click the YAML tab.
Under spec.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.
RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.
Example ClusterCSIDriver efs.csi.aws.com YAML file

spec:
  driverConfig:
    driverType: AWS
    aws:
      efsVolumeMetrics:
        state: RecursiveWalk
        recursiveWalk:
          refreshPeriodMinutes: 100
          fsRateLimit: 10
Optional: To define how the recursive walk operates, you can also set the following fields:
- refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.
- fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.
- Click Save.
To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.
5.4.9.3. Enabling usage metrics using the CLI
To enable Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics using the CLI:
Edit ClusterCSIDriver by running the following command:
$ oc edit clustercsidriver efs.csi.aws.com
Under spec.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.
RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.
Example ClusterCSIDriver efs.csi.aws.com YAML file

spec:
  driverConfig:
    driverType: AWS
    aws:
      efsVolumeMetrics:
        state: RecursiveWalk
        recursiveWalk:
          refreshPeriodMinutes: 100
          fsRateLimit: 10
Optional: To define how the recursive walk operates, you can also set the following fields:
- refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.
- fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.
- Save the changes to the efs.csi.aws.com object.
To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.
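As an alternative to oc edit, you can change the state non-interactively. A sketch using a merge patch with the same fields shown in the example file above:

$ oc patch clustercsidriver efs.csi.aws.com --type=merge \
  -p '{"spec":{"driverConfig":{"driverType":"AWS","aws":{"efsVolumeMetrics":{"state":"RecursiveWalk"}}}}}'

To turn the feature off again, patch the state back to Disabled in the same way.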
5.4.10. Amazon Elastic File Storage troubleshooting
The following information provides guidance on how to troubleshoot issues with Amazon Elastic File Storage (Amazon EFS):
- The AWS EFS Operator and CSI driver run in the openshift-cluster-csi-drivers namespace. To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:
$ oc adm must-gather
[must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
[must-gather ] OUT namespace/openshift-must-gather-xm4wq created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
[must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
To show AWS EFS Operator errors, view the ClusterCSIDriver status:

$ oc get clustercsidriver efs.csi.aws.com -o yaml
If a volume cannot be mounted to a pod (as shown in the output of the following command):
$ oc describe pod
...
  Type     Reason       Age    From               Message
  ----     ------       ----   ----               -------
  Normal   Scheduled    2m13s  default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
  Warning  FailedMount  13s    kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1
  Warning  FailedMount  10s    kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
1: Warning message indicating volume not mounted.
This error is frequently caused by AWS dropping packets between a Red Hat OpenShift Service on AWS node and Amazon EFS.
Check that the following are correct:
- AWS firewall and Security Groups
- Networking: port number and IP addresses
5.4.11. Uninstalling the AWS EFS CSI Driver Operator
All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator (a Red Hat operator).
Prerequisites
- Access to the Red Hat OpenShift Service on AWS web console.
Procedure
To uninstall the AWS EFS CSI Driver Operator from the web console:
- Log in to the web console.
- Stop all applications that use AWS EFS PVs.
Delete all AWS EFS PVs:
- Click Storage > PersistentVolumeClaims.
- Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.
Uninstall the AWS EFS CSI driver:
Note: Before you can uninstall the Operator, you must remove the CSI driver first.
- Click Administration > CustomResourceDefinitions > ClusterCSIDriver.
- On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.
- When prompted, click Delete.
Uninstall the AWS EFS CSI Operator:
- Click Operators > Installed Operators.
- On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.
- On the upper right of the Installed Operators > Operator details page, click Actions > Uninstall Operator.
- When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.
After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.
Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. A Red Hat OpenShift Service on AWS cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC. Amazon does not allow deletion of such a VPC.