Chapter 21. Configuring Persistent Storage
21.1. Overview
The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
These topics show how to configure persistent volumes in OpenShift Container Platform using the supported volume plug-ins.
21.2. Persistent Storage Using NFS
21.2.1. Overview
OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
This topic covers the specifics of using the NFS persistent storage type. Some familiarity with OpenShift Container Platform and NFS is beneficial. See the Persistent Storage concept topic for details on the OpenShift Container Platform persistent volume (PV) framework in general.
21.2.2. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths is all that is required.
You must first create an object definition for the PV:
Example 21.1. PV Object Definition Using NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 5Gi 2
  accessModes:
  - ReadWriteOnce 3
  nfs: 4
    path: /tmp 5
    server: 172.17.0.2 6
  persistentVolumeReclaimPolicy: Recycle 7
- 1: The name of the volume. This is the PV identity in various oc <command> pod commands.
- 2: The amount of storage allocated to this volume.
- 3: Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
- 4: The volume type being used, in this case the nfs plug-in.
- 5: The path that is exported by the NFS server.
- 6: The host name or IP address of the NFS server.
- 7: The reclaim policy for the PV. This defines what happens to a volume when released from its claim. Valid options are Retain (default) and Recycle. See Reclaiming Resources.
Each NFS volume must be mountable by all schedulable nodes in the cluster.
Save the definition to a file, for example nfs-pv.yaml, and create the PV:
$ oc create -f nfs-pv.yaml
persistentvolume "pv0001" created
Verify that the PV was created:
# oc get pv
NAME      LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5368709120   RWO           Available                       31s
Next, create a PVC that binds to the new PV:
Example 21.2. PVC Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
  - ReadWriteOnce 1
  resources:
    requests:
      storage: 1Gi 2
Save the definition to a file, for example nfs-claim.yaml, and create the PVC:
# oc create -f nfs-claim.yaml
21.2.3. Enforcing Disk Quotas
You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount (for example, 10Gi) and be matched with a corresponding volume of equal or greater capacity.
21.2.4. NFS Volume Security
This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
See the full Volume Security topic before implementing NFS volumes.
Developers request NFS storage by referencing, in the volumes section of their pod definition, either a PVC by name or the NFS volume plug-in directly.
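For example, a pod can reference the nfs-claim1 PVC created earlier in this topic. A minimal sketch (the volume name and mount path are illustrative):

spec:
  containers:
  - name: ...
    volumeMounts:
    - name: nfs-data              # must match the volume name in the volumes section
      mountPath: /usr/share/data  # illustrative mount path inside the container
  volumes:
  - name: nfs-data
    persistentVolumeClaim:
      claimName: nfs-claim1       # the PVC defined in the Provisioning section above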
The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plug-in mounts the container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.
As an example, if the target NFS directory appears on the NFS server as:
# ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs

# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match the SELinux labels, and either run with a UID of 65534 (the nfsnobody owner) or with 5555 in its supplemental groups, in order to access the directory.
The owner ID of 65534 is used as an example. Even though NFS’s root_squash maps root (0) to nfsnobody (65534), NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.
21.2.4.1. Group IDs
The recommended way to handle NFS access (assuming it is not an option to change permissions on the NFS export) is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage, such as Ceph RBD or iSCSI, uses the fsGroup SCC strategy and the fsGroup value in the pod's securityContext.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. Supplemental groups are covered further in the full Volume Security topic.
Because the group ID on the example target NFS directory shown above is 5555, the pod can define that group ID using supplementalGroups under the pod-level securityContext definition. For example:
spec:
  containers:
  - name: ...
  securityContext: 1
    supplementalGroups: [5555] 2
Assuming there are no custom SCCs that might satisfy the pod's requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.
As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC, as described in pod security and custom SCCs, is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
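A partial sketch of such a custom SCC, showing only the relevant stanza (the SCC name and the range are assumptions; all other required SCC fields are omitted):

kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: nfs-supplemental-scc     # hypothetical SCC name
# ... other required SCC fields omitted ...
supplementalGroups:
  type: MustRunAs                # enforces group ID range checking
  ranges:
  - min: 5000                    # illustrative range that includes 5555
    max: 6000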
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the pod specification. See Add an SCC to a User, Group, or Project for details.
21.2.4.2. User IDs
User IDs can be defined in the container image or in the pod definition. The full Volume Security topic covers controlling storage access based on user IDs, and should be read prior to setting up NFS persistent storage.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534 (ignoring group IDs for the moment), so the following can be added to the pod definition:
spec:
  containers: 1
  - name: ...
    securityContext:
      runAsUser: 65534 2
Assuming the default project and the restricted SCC, the pod’s requested user ID of 65534 is not allowed, and therefore the pod fails. The pod fails for the following reasons:
- It requests 65534 as its user ID.
- All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 (actually, all policies of the SCCs are checked but the focus here is on user ID).
- Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
- 65534 is not included in the SCC or project's user ID range.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC, as described in the full Volume Security topic. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the pod specification. See Add an SCC to a User, Group, or Project for details.
21.2.4.3. SELinux
See the full Volume Security topic for information on controlling storage access in conjunction with using SELinux.
By default, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly, but is read-only.
To enable writing to NFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_use_nfs 1
The -P option above makes the boolean persistent between reboots.
The virt_use_nfs boolean is defined by the docker-selinux package. If an error is seen indicating that this boolean is not defined, ensure that this package has been installed.
21.2.4.4. Export Settings
In order to enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Each export must be:
/<example_fs> *(rw,root_squash)
The firewall must be configured to allow traffic to the mount point.
For NFSv4, configure the default port 2049 (nfs) and port 111 (portmapper).

NFSv4
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

NFSv3
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups, as shown in Group IDs above. See the full Volume Security topic for additional pod security information as well.
21.2.5. Reclaiming Resources
NFS implements the OpenShift Container Platform Recyclable plug-in interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.
By default, PVs are set to Retain. NFS volumes which are set to Recycle are scrubbed (i.e., rm -rf is run on the volume) after being released from their claim (i.e., after the user's PersistentVolumeClaim bound to the volume is deleted). Once recycled, the NFS volume can be bound to a new claim.
Once the claim on a PV is released (that is, the PVC is deleted), the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim on nfs1, which causes nfs1 to be Released. If the administrator wishes to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.
A PV with a reclaim policy of Recycle scrubs (rm -rf) the data and marks it as Available for claim. The Recycle reclaim policy is deprecated starting in OpenShift Container Platform 3.6 and should be avoided. Anyone using the recycler should use dynamic provisioning and volume deletion instead.
21.2.6. Automation
Clusters can be provisioned with persistent storage using NFS in the following ways:
- Enforce storage quotas using disk partitions.
- Enforce security by restricting volumes to the project that has a claim to them.
- Configure reclamation of discarded resources for each PV.
There are many ways that you can use scripts to automate the above tasks. You can use an example Ansible playbook to help you get started.
21.2.7. Additional Configuration and Troubleshooting
Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:
- NFSv4 mount incorrectly shows all files with ownership of nobody:nobody
- Disabling ID mapping on NFSv4
21.3. Persistent Storage Using GlusterFS
21.3.1. Overview
You can configure your OpenShift Container Platform cluster to use Red Hat Gluster Storage as persistent storage for containerized applications. There are two deployment solutions available when using Red Hat Gluster Storage, using either a containerized or dedicated storage cluster. This topic focuses mainly on the persistent volume plug-in solution using a dedicated Red Hat Gluster Storage cluster.
21.3.1.1. Containerized Red Hat Gluster Storage
Starting with the Red Hat Gluster Storage 3.1 update 3 release, you can deploy containerized Red Hat Gluster Storage directly on OpenShift Container Platform. Containerized Red Hat Gluster Storage converged with OpenShift Container Platform addresses the use case where containerized applications require both shared file storage and the flexibility of a converged infrastructure with compute and storage instances being scheduled and run from the same set of hardware.
Figure 21.1. Architecture - Red Hat Gluster Storage Container Converged with OpenShift
Step-by-step instructions for this containerized solution are provided separately in the following Red Hat Gluster Storage documentation:
21.3.1.2. Dedicated Storage Cluster
If you have a dedicated Red Hat Gluster Storage cluster available in your environment, you can configure OpenShift Container Platform’s Gluster volume plug-in. The dedicated storage cluster delivers persistent Red Hat Gluster Storage file storage for containerized applications over the network. The applications access storage served out from the storage clusters through common storage protocols.
Figure 21.2. Architecture - Dedicated Red Hat Gluster Storage Cluster Using the OpenShift Container Platform Volume Plug-in
You can also dynamically provision volumes in a dedicated Red Hat Gluster Storage cluster that are enabled by Heketi. See Managing Volumes Using Heketi in the Red Hat Gluster Storage Administration Guide for more information.
This solution is a conventional deployment where containerized compute applications run on an OpenShift Container Platform cluster. The remaining sections in this topic provide the step-by-step instructions for the dedicated Red Hat Gluster Storage solution.
This topic presumes some familiarity with OpenShift Container Platform and GlusterFS:
- See the Persistent Storage topic for details on the OpenShift Container Platform PV framework in general.
- See the Red Hat Gluster Storage 3 Administration Guide for more on GlusterFS.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.3.2. Support Requirements
The following requirements must be met to create a supported integration of Red Hat Gluster Storage and OpenShift Container Platform.
21.3.2.1. Supported Operating Systems
The following table lists the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server.
Red Hat Gluster Storage | OpenShift Container Platform
------------------------|-----------------------------
3.1.3                   | 3.1 or later
21.3.2.2. Environment Requirements
The environment requirements for OpenShift Container Platform and Red Hat Gluster Storage are described in this section.
Red Hat Gluster Storage
- All installations of Red Hat Gluster Storage must have valid subscriptions to Red Hat Network channels and Subscription Management repositories.
- Red Hat Gluster Storage installations must adhere to the requirements laid out in the Red Hat Gluster Storage Installation Guide.
- Red Hat Gluster Storage installations must be completely up to date with the latest patches and upgrades. Refer to the Red Hat Gluster Storage 3.1 Installation Guide to upgrade to the latest version.
- The versions of OpenShift Container Platform and Red Hat Gluster Storage integrated must be compatible, according to the information in Supported Operating Systems.
- A fully-qualified domain name (FQDN) must be set for each hypervisor and Red Hat Gluster Storage server node. Ensure that correct DNS records exist, and that the FQDN is resolvable via both forward and reverse DNS lookup.
Red Hat OpenShift Container Platform
- All installations of OpenShift Container Platform must have valid subscriptions to Red Hat Network channels and Subscription Management repositories.
- OpenShift Container Platform installations must adhere to the requirements laid out in the Installation and Configuration documentation.
- The OpenShift Container Platform cluster must be up and running.
- A user with cluster-admin permissions must be created.
- All OpenShift Container Platform nodes on RHEL systems must have the glusterfs-fuse RPM installed, which should match the version of Red Hat Gluster Storage server running in the containers. For more information on installing glusterfs-fuse, see Native Client in the Red Hat Gluster Storage Administration Guide.
21.3.3. Provisioning
To provision GlusterFS volumes using the dedicated storage cluster solution, the following are required:
- An existing storage device in your underlying infrastructure.
- A distinct list of servers (IP addresses) in the Gluster cluster, to be defined as endpoints.
- A service, to persist the endpoints (optional).
- An existing Gluster volume to be referenced in the persistent volume object.
glusterfs-fuse installed on each schedulable OpenShift Container Platform node in your cluster:
$ yum install glusterfs-fuse
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
21.3.3.1. Creating Gluster Endpoints
An endpoints definition defines the GlusterFS cluster as Endpoints and includes the IP addresses of your Gluster servers. The port value can be any numeric value within the accepted range of ports. Optionally, you can create a service that persists the endpoints.
Define the following service:
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster 1
spec:
  ports:
  - port: 1
- 1: This name must be defined in the endpoints definition. If using a service, then the endpoints name must match the service name.
Save the service definition to a file, for example gluster-service.yaml, then create the service:
$ oc create -f gluster-service.yaml
Verify that the service was created:
$ oc get services
NAME                CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
glusterfs-cluster   172.30.205.34   <none>        1/TCP     <none>     44s
Define the Gluster endpoints:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster 1
subsets:
  - addresses:
      - ip: 192.168.122.221 2
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222 3
    ports:
      - port: 1 4
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints:
$ oc create -f gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
Verify that the endpoints were created:
$ oc get endpoints
NAME                ENDPOINTS                             AGE
docker-registry     10.1.0.3:5000                         4h
glusterfs-cluster   192.168.122.221:1,192.168.122.222:1   11s
kubernetes          172.16.35.3:8443                      4d
21.3.3.2. Creating the Persistent Volume
GlusterFS does not support the 'Recycle' reclaim policy.
Next, define the PV in an object definition before creating it in OpenShift Container Platform:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes: 3
    - ReadWriteMany
  glusterfs: 4
    endpoints: glusterfs-cluster 5
    path: myVol1 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain 7
- 1: The name of the volume. This is how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- 4: The volume type being used, in this case the glusterfs plug-in.
- 5: The endpoints name that defines the Gluster cluster created in Creating Gluster Endpoints.
- 6: The Gluster volume that will be accessed, as shown in the gluster volume status command.
- 7: The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. For GlusterFS, the accepted values include Retain and Delete.
Endpoints are name-spaced. Each project accessing the Gluster volume needs its own endpoints.
Save the definition to a file, for example gluster-pv.yaml, and create the persistent volume:
$ oc create -f gluster-pv.yaml
Verify that the persistent volume was created:
$ oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-default-volume   <none>    2147483648   RWX           Available                       2s
21.3.3.3. Creating the Persistent Volume Claim
Developers request GlusterFS storage by referencing either a PVC or the Gluster volume plug-in directly in the volumes section of a pod spec. A PVC exists only in the user's project and can only be referenced by pods within that project. Any attempt to access a PV across a project causes the pod to fail.
Create a PVC that will bind to the new PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany 1
  resources:
    requests:
      storage: 1Gi 2
Save the definition to a file, for example gluster-claim.yaml, and create the PVC:
$ oc create -f gluster-claim.yaml
Note: PVs and PVCs make sharing a volume across a project simpler. The gluster-specific information contained in the PV definition can also be defined directly in a pod specification.
21.3.4. Gluster Volume Security
This section covers Gluster volume security, including matching permissions and SELinux considerations. Understanding the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux is presumed.
See the full Volume Security topic before implementing Gluster volumes.
As an example, assume that the target Gluster volume, HadoopVol, is mounted under /mnt/glusterfs/, with the following POSIX permissions and SELinux labels:
$ ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

$ id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
In order to access the HadoopVol volume, containers must match the SELinux label, and either run with a UID of 592 or with 590 in their supplemental groups. The OpenShift Container Platform GlusterFS plug-in mounts the volume in the container with the same POSIX ownership and permissions found on the target gluster mount, namely the owner is 592 and the group ID is 590. However, the container is not run with its effective UID equal to 592, nor with its GID equal to 590, which is the desired behavior. Instead, a container's UID and supplemental groups are determined by Security Context Constraints (SCCs) and the project defaults.
21.3.4.1. Group IDs
Configure Gluster volume access by using supplemental groups, assuming it is not an option to change permissions on the Gluster mount. Supplemental groups in OpenShift Container Platform are used for shared storage, such as GlusterFS. In contrast, block storage, such as Ceph RBD or iSCSI, uses the fsGroup SCC strategy and the fsGroup value in the pod's securityContext.
Use supplemental group IDs instead of user IDs to gain access to persistent storage. Supplemental groups are covered further in the full Volume Security topic.
The group ID on the target Gluster mount example above is 590. Therefore, a pod can define that group ID using supplementalGroups under the pod-level securityContext definition. For example:
spec:
  containers:
  - name: ...
  securityContext: 1
    supplementalGroups: [590] 2
Assuming there are no custom SCCs that satisfy the pod's requirements, the pod matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group IDs are accepted without range checking.
As a result, the above pod will pass admissions and can be launched. However, if group ID range checking is desired, use a custom SCC, as described in pod security and custom SCCs. A custom SCC can be created to define minimum and maximum group IDs, enforce group ID range checking, and allow a group ID of 590.
21.3.4.2. User IDs
User IDs can be defined in the container image or in the pod definition. The full Volume Security topic covers controlling storage access based on user IDs, and should be read prior to setting up Gluster persistent storage.
Use supplemental group IDs instead of user IDs to gain access to persistent storage.
In the target Gluster mount example above, the container needs a UID set to 592, so the following can be added to the pod definition:
spec:
  containers: 1
  - name: ...
    securityContext:
      runAsUser: 592 2
With the default project and the restricted SCC, a pod’s requested user ID of 592 will not be allowed, and the pod will fail. This is because:
- The pod requests 592 as its user ID.
- All SCCs available to the pod are examined to see which SCC will allow a user ID of 592.
- Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
- 592 is not included in the SCC or project's user ID range.
Do not modify the predefined SCCs. Instead, create a custom SCC so that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 592 is allowed.
21.3.4.3. SELinux
See the full Volume Security topic for information on controlling storage access in conjunction with using SELinux.
By default, SELinux does not allow writing from a pod to a remote Gluster server.
To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
$ sudo setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
The -P option makes the boolean persistent between reboots.
21.4. Persistent Storage Using OpenStack Cinder
21.4.1. Overview
You can provision your OpenShift Container Platform cluster with persistent storage using OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.
Before creating persistent volumes using Cinder, OpenShift Container Platform must first be properly configured for OpenStack.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. OpenStack Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims, however, are specific to a project or namespace and can be requested by users.
For a detailed example, see the guide for WordPress and MySQL using persistent volumes.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.4.2. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for OpenStack, all that is required for Cinder is a Cinder volume ID and the PersistentVolume API.
21.4.2.1. Creating the Persistent Volume
Cinder does not support the 'Recycle' reclaim policy.
You must define your persistent volume in an object definition before creating it in OpenShift Container Platform:
Example 21.3. Persistent Volume Object Definition Using Cinder
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5
- 1: The name of the volume. This will be how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: This defines the volume type being used, in this case the cinder plug-in.
- 4: File system type to mount.
- 5: This is the Cinder volume that will be used.
Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.
Save your definition to a file, for example cinder-pv.yaml, and create the persistent volume:
# oc create -f cinder-pv.yaml
persistentvolume "pv0001" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5Gi        RWO           Available                       2s
Users can then request storage using persistent volume claims, which can now utilize your new persistent volume.
Persistent volume claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume from a different namespace causes the pod to fail.
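For reference, a minimal claim sketch that could bind to the PV above (the claim name cinder-claim and the requested size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim         # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # must match an access mode offered by the PV
  resources:
    requests:
      storage: 5Gi           # request no more than the PV capacity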
21.4.2.2. Volume Format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This allows using unformatted Cinder volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
21.5. Persistent Storage Using Ceph Rados Block Device (RBD)
21.5.1. Overview
OpenShift Container Platform clusters can be provisioned with persistent storage using Ceph RBD.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the Ceph RBD-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
This topic presumes some familiarity with OpenShift Container Platform and Ceph RBD. See the Persistent Storage concept topic for details on the OpenShift Container Platform persistent volume (PV) framework in general.
Project and namespace are used interchangeably throughout this document. See Projects and Users for details on the relationship.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.5.2. Provisioning
To provision Ceph volumes, the following are required:
- An existing storage device in your underlying infrastructure.
- The Ceph key to be used in an OpenShift Container Platform secret object.
- The Ceph image name.
- The file system type on top of the block storage (e.g., ext4).
ceph-common installed on each schedulable OpenShift Container Platform node in your cluster:
# yum install ceph-common
21.5.2.1. Creating the Ceph Secret
Define the authorization key in a secret configuration, which is then converted to base64 for use by OpenShift Container Platform.
In order to use Ceph storage to back a persistent volume, the secret must be created in the same project as the PVC and pod. The secret cannot simply be in the default project.
Run ceph auth get-key on a Ceph MON node to display the key value for the client.admin user:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
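The key value shown above is base64-encoded. Assuming access to a Ceph MON node, one way to produce such a value is to pipe the key through base64:

$ ceph auth get-key client.admin | base64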
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
Verify that the secret was created:
# oc get secret ceph-secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         23d
21.5.2.2. Creating the Persistent Volume
Ceph RBD does not support the 'Recycle' reclaim policy.
Developers request Ceph RBD storage by referencing either a PVC, or the Ceph RBD volume plug-in directly in the volumes section of a pod specification. A PVC exists only in the user's namespace and can be referenced only by pods within that same namespace. Any attempt to access a PV from a different namespace causes the pod to fail.
Define the PV in an object definition before creating it in OpenShift Container Platform:
Example 21.4. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce 3
  rbd: 4
    monitors: 5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4 7
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1: The name of the PV that is referenced in pod definitions or displayed in various oc volume commands.
- 2: The amount of storage allocated to this volume.
- 3: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
- 4: The volume type being used, in this case the rbd plug-in.
- 5: An array of Ceph monitor IP addresses and ports.
- 6: The Ceph secret used to create a secure connection from OpenShift Container Platform to the Ceph server.
- 7: The file system type mounted on the Ceph RBD block device.

Important: Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.

Save your definition to a file, for example ceph-pv.yaml, and create the PV:
# oc create -f ceph-pv.yaml
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv   <none>    2147483648   RWO           Available                       2s
Create a PVC that will bind to the new PV:
Example 21.5. PVC Object Definition
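The claim body is not reproduced here; a minimal sketch consistent with the PV above (the 2Gi request is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi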
Save the definition to a file, for example ceph-claim.yaml, and create the PVC:
# oc create -f ceph-claim.yaml
21.5.3. Ceph Volume Security
See the full Volume Security topic before implementing Ceph RBD volumes.
A significant difference between shared volumes (NFS and GlusterFS) and block volumes (Ceph RBD, iSCSI, and most cloud storage), is that the user and group IDs defined in the pod definition or container image are applied to the target physical storage. This is referred to as managing ownership of the block device. For example, if the Ceph RBD mount has its owner set to 123 and its group ID set to 567, and if the pod defines its runAsUser set to 222 and its fsGroup to be 7777, then the Ceph RBD physical mount's ownership will be changed to 222:7777.
Even if the user and group IDs are not defined in the pod specification, the resulting pod may have defaults defined for these IDs based on its matching SCC, or its project. See the full Volume Security topic which covers storage aspects of SCCs and defaults in greater detail.
A pod defines the group ownership of a Ceph RBD volume using the fsGroup stanza under the pod-level securityContext definition:
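A minimal sketch, reusing the 7777 group ID from the example above (container details are elided):

spec:
  containers:
  - name: ...
  securityContext:
    fsGroup: 7777    # pod-level fsGroup applied as the group owner of the block device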
21.6. Persistent Storage Using AWS Elastic Block Store
21.6.1. Overview
OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage using AWS EC2. Some familiarity with Kubernetes and AWS is assumed.
Before creating persistent volumes using AWS, OpenShift Container Platform must first be properly configured for AWS ElasticBlockStore.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims, however, are specific to a project or namespace and can be requested by users.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.6.2. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for AWS Elastic Block Store, all that is required for OpenShift Container Platform and AWS is an AWS EBS volume ID and the PersistentVolume API.
21.6.2.1. Creating the Persistent Volume
AWS does not support the 'Recycle' reclaim policy.
You must define your persistent volume in an object definition before creating it in OpenShift Container Platform:
Example 21.6. Persistent Volume Object Definition Using AWS
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" awsElasticBlockStore: 3 fsType: "ext4" 4 volumeID: "vol-f37a03aa" 5
- 1: The name of the volume. This will be how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: This defines the volume type being used, in this case the awsElasticBlockStore plug-in.
- 4: File system type to mount.
- 5: This is the AWS volume that will be used.
Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.
Save your definition to a file, for example aws-pv.yaml, and create the persistent volume:
# oc create -f aws-pv.yaml
persistentvolume "pv0001" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5Gi        RWO           Available                       2s
Users can then request storage using persistent volume claims, which can now utilize your new persistent volume.
Persistent volume claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume from a different namespace causes the pod to fail.
21.6.2.2. Volume Format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
21.7. Persistent Storage Using GCE Persistent Disk
21.7.1. Overview
OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.
Before creating persistent volumes using GCE, OpenShift Container Platform must first be properly configured for GCE Persistent Disk.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims, however, are specific to a project or namespace and can be requested by users.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.7.2. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for GCE PersistentDisk, all that is required for OpenShift Container Platform and GCE is a GCE Persistent Disk volume ID and the PersistentVolume API.
21.7.2.1. Creating the Persistent Volume
GCE does not support the 'Recycle' reclaim policy.
You must define your persistent volume in an object definition before creating it in OpenShift Container Platform:
Example 21.7. Persistent Volume Object Definition Using GCE
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" gcePersistentDisk: 3 fsType: "ext4" 4 pdName: "pd-disk-1" 5
- 1: The name of the volume. This will be how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: This defines the volume type being used, in this case the gcePersistentDisk plug-in.
- 4: File system type to mount.
- 5: This is the GCE Persistent Disk volume that will be used.
Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.
Save your definition to a file, for example gce-pv.yaml, and create the persistent volume:
# oc create -f gce-pv.yaml
persistentvolume "pv0001" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5Gi        RWO           Available                       2s
Users can then request storage using persistent volume claims, which can now utilize your new persistent volume.
Persistent volume claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume from a different namespace causes the pod to fail.
21.7.2.2. Volume Format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
21.7.2.3. Multi-zone Configuration
In multi-zone configurations, you must specify the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone PV labels to match the zone where the GCE volume exists.
Example 21.8. Persistent Volume Object With Failure Domain
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" labels: failure-domain.beta.kubernetes.io/region: "us-central1" 1 failure-domain.beta.kubernetes.io/zone: "us-central1-a" 2 spec: capacity: storage: "5Gi" accessModes: - "ReadWriteOnce" gcePersistentDisk: fsType: "ext4" pdName: "pd-disk-1"
21.8. Persistent Storage Using iSCSI
21.8.1. Overview
You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.8.2. Provisioning
Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the file system type, and the PersistentVolume API.
Optionally, multipath portals and Challenge Handshake Authentication Protocol (CHAP) configuration can be provided.
iSCSI does not support the 'Recycle' reclaim policy.
Example 21.9. Persistent Volume Object Definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.16.154.81
    iqn: iqn.2014-12.example.server:storage.target00
    lun: 0
    fsType: 'ext4'
    readOnly: false
    chapAuthDiscovery: true
    chapAuthSession: true
    secretRef:
      name: chap-secret
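The PV above references a secret named chap-secret for CHAP authentication. A minimal sketch of such a secret, assuming the key names used by the upstream Kubernetes iSCSI CHAP support (the user names and passwords are placeholders; verify the expected keys for your version):

apiVersion: v1
kind: Secret
metadata:
  name: chap-secret
type: "kubernetes.io/iscsi-chap"
stringData:
  discovery.sendtargets.auth.username: discovery_user      # placeholder
  discovery.sendtargets.auth.password: discovery_password  # placeholder
  node.session.auth.username: session_user                 # placeholder
  node.session.auth.password: session_password             # placeholder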
21.8.2.1. Enforcing Disk Quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (e.g., 10Gi) and be matched with a corresponding volume of equal or greater capacity.
21.8.2.2. iSCSI Volume Security
Users request storage with a PersistentVolumeClaim
. This claim only lives in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.
Each iSCSI LUN must be accessible by all nodes in the cluster.
21.9. Persistent Storage Using Fibre Channel
21.9.1. Overview
You can provision your OpenShift Container Platform cluster with persistent storage using Fibre Channel. Some familiarity with Kubernetes and Fibre Channel is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
High-availability of storage in the infrastructure is left to the underlying storage provider.
21.9.2. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. All that is required for Fibre Channel persistent storage is the targetWWNs (array of Fibre Channel target World Wide Names), a valid LUN number, the file system type, and the PersistentVolume API. A persistent volume and a LUN have a one-to-one mapping between them.
Fibre Channel does not support the 'Recycle' reclaim policy.
Persistent Volumes Object Definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
fc:
targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 1
lun: 2
fsType: ext4
- 1: Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.
21.9.2.1. Enforcing Disk Quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (e.g., 10Gi) and be matched with a corresponding volume of equal or greater capacity.
21.9.2.2. Fibre Channel Volume Security
Users request storage with a PersistentVolumeClaim
. This claim only lives in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.
21.10. Persistent Storage Using Azure Disk
21.10.1. Overview
OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims, however, are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
21.10.2. Prerequisites
Before creating persistent volumes using Azure, ensure your OpenShift Container Platform cluster meets the following requirements:
- OpenShift Container Platform must first be configured for Azure Disk.
- Each node host in the infrastructure must match the Azure virtual machine name.
- Each node host must be in the same resource group.
21.10.3. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for Azure Disk, all that is required for OpenShift Container Platform and Azure is an Azure Disk Name and Disk URI and the PersistentVolume API.
21.10.3.1. Creating the Persistent Volume
Azure does not support the Recycle reclaim policy.
You must define your persistent volume in an object definition before creating it in OpenShift Container Platform:
Example 21.10. Persistent Volume Object Definition Using Azure
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" azureDisk: 3 diskName: test2.vhd 4 diskURI: https://someacount.blob.core.windows.net/vhds/test2.vhd 5 cachingMode: ReadWrite 6 fsType: ext4 7 readOnly: false 8
- 1: The name of the volume. This will be how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: This defines the volume type being used (azureDisk plug-in, in this example).
- 4: The name of the data disk in the blob storage.
- 5: The URI of the data disk in the blob storage.
- 6: Host caching mode: None, ReadOnly, or ReadWrite.
- 7: File system type to mount (for example, ext4, xfs, and so on).
- 8: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Save your definition to a file, for example azure-pv.yaml, and create the persistent volume:
# oc create -f azure-pv.yaml
persistentvolume "pv0001" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5Gi        RWO           Available                       2s
Now you can request storage using persistent volume claims, which can now use your new persistent volume.
Persistent volume claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume from a different namespace causes the pod to fail.
21.10.3.2. Volume Format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This allows unformatted Azure volumes to be used as persistent volumes because OpenShift Container Platform formats them before the first use.
21.11. Persistent Storage Using Azure File
21.11.1. Overview
OpenShift Container Platform supports Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.
High availability of storage in the infrastructure is left to the underlying storage provider.
21.11.2. Before you begin
Install samba-client, samba-common, and cifs-utils on all nodes:

$ sudo yum install samba-client samba-common cifs-utils
Enable SELinux booleans on all nodes:
$ /usr/sbin/setsebool -P virt_use_samba on
$ /usr/sbin/setsebool -P virt_sandbox_use_samba on
21.11.3. Creating the Persistent Volume
Azure File does not support the Recycle reclaim policy.
21.11.4. Creating the Azure Storage Account Secret
Define the Azure Storage Account name and key in a secret configuration, which is then converted to base64 for use by OpenShift Container Platform.
Obtain an Azure Storage Account name and key and encode to base64:
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: azhzdGVzdA==
  azurestorageaccountkey: eElGMXpKYm5ub2pGTE1Ta0JwNTBteDAyckhzTUsyc2pVN21GdDRMMTNob0I3ZHJBYUo4akQ2K0E0NDNqSm9nVjd5MkZVT2hRQ1dQbU02WWFOSHk3cWc9PQ==
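For reference, the azurestorageaccountname value above is simply the base64 encoding of the account name (here, k8stest); the account key is encoded the same way:

$ echo -n 'k8stest' | base64
azhzdGVzdA==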
Save the secret definition to a file, for example azure-secret.yaml, then create the secret:
$ oc create -f azure-secret.yaml
Verify that the secret was created:
$ oc get secret azure-secret
NAME           TYPE      DATA      AGE
azure-secret   Opaque    1         23d
You must define your persistent volume in an object definition before creating it in OpenShift Container Platform:
Persistent Volume Object Definition Using Azure File
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5120Gi" 2 accessModes: - "ReadWriteMany" azureFile: 3 secretName: azure-secret 4 shareName: example 5 readOnly: false 6
- 1: The name of the volume. This is how it is identified via persistent volume claims or from pods.
- 2: The amount of storage allocated to this volume.
- 3: This defines the volume type being used: the azureFile plug-in.
- 4: The name of the secret used.
- 5: The name of the file share.
- 6: Defaults to false (read/write). ReadOnly here forces the ReadOnly setting in VolumeMounts.

Save your definition to a file, for example azure-file-pv.yaml, and create the persistent volume:
$ oc create -f azure-file-pv.yaml
persistentvolume "pv0001" created
Verify that the persistent volume was created:
$ oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    5Gi        RWM           Available                       2s
Now you can request storage using persistent volume claims, which can now use your new persistent volume.
Persistent volume claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume from a different namespace causes the pod to fail.
21.12. Dynamic Provisioning and Creating Storage Classes
21.12.1. Overview
The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources.
The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.
21.12.2. Available Dynamically Provisioned Plug-ins
OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
Storage Type                  | Provisioner Plug-in Name | Notes
------------------------------|--------------------------|------
OpenStack Cinder              | kubernetes.io/cinder     |
AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs    | For dynamic provisioning when using multiple clusters in different zones, tag each node with its cluster ID.
GCE Persistent Disk (gcePD)   | kubernetes.io/gce-pd     | In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node from the current cluster exists.
GlusterFS                     | kubernetes.io/glusterfs  | Container Native Storage (CNS) utilizes Heketi to manage Gluster Storage.
Ceph RBD                      | kubernetes.io/rbd        |
Trident from NetApp           | netapp.io/trident        | Storage orchestrator for NetApp ONTAP, SolidFire, and E-Series storage.
Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider, as per that provider's documentation.
21.12.3. Defining a StorageClass
StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. The following sections describe the basic object definition for a StorageClass and specific examples for each of the supported plug-in types.
21.12.3.1. Basic StorageClass Object Definition
Example 21.11. StorageClass Basic Object Definition
kind: StorageClass 1
apiVersion: storage.k8s.io/v1beta1 2
metadata:
  name: foo 3
  annotations: 4
    ...
provisioner: kubernetes.io/plug-in-type 5
parameters: 6
  param1: value
  ...
  paramN: value
- 1
- (required) The API object type.
- 2
- (required) The current apiVersion.
- 3
- (required) The name of the StorageClass.
- 4
- (optional) Annotations for the StorageClass.
- 5
- (required) The type of provisioner associated with this storage class.
- 6
- (optional) The parameters required for the specific provisioner; these vary from plug-in to plug-in.
21.12.3.2. StorageClass Annotations
To set a StorageClass as the cluster-wide default:
storageclass.beta.kubernetes.io/is-default-class: "true"
This enables any persistent volume claim (PVC) that does not specify a StorageClass to be provisioned automatically through the default StorageClass.
To set a StorageClass description:
kubernetes.io/description: My StorageClass Description
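The following is a minimal sketch showing how both annotations can appear together in a StorageClass definition; the StorageClass name and the provisioner shown are illustrative assumptions, not part of the annotations themselves:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard                    # assumed name
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    kubernetes.io/description: My StorageClass Description
provisioner: kubernetes.io/aws-ebs  # any supported provisioner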
21.12.3.3. OpenStack Cinder Object Definition
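A minimal sketch of a Cinder StorageClass, assuming the kubernetes.io/cinder provisioner with its commonly used type (Cinder volume type) and availability (availability zone) parameters; the values shown are assumptions:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast          # Cinder volume type defined in OpenStack; assumed value
  availability: nova  # OpenStack availability zone; assumed value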
21.12.3.4. AWS ElasticBlockStore (EBS) Object Definition
Example 21.13. aws-ebs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 1
  zone: us-east-1d 2
  zones: us-east-1d, us-east-1c 3
  iopsPerGB: "10" 4
  encrypted: "true" 5
  kmsKeyId: keyvalue 6
- 1
- Select from io1, gp2, sc1, or st1. The default is gp2. See AWS documentation for details on each volume type.
- 2
- AWS zone. If no zone is specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. The zone and zones parameters must not be used at the same time.
- 3
- A comma-separated list of AWS zones. If no zone is specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. The zone and zones parameters must not be used at the same time.
- 4
- Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See AWS documentation for further details.
- 5
- Denotes whether to encrypt the EBS volume. Valid values are true or false.
- 6
- Optional. The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true, then AWS generates a key. See AWS documentation for a valid ARN value.
21.12.3.5. GCE PersistentDisk (gcePD) Object Definition
Example 21.14. gce-pd-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard 1
  zone: us-central1-a 2
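The callouts here are analogous to the EBS parameters above: type selects the GCE disk type (pd-standard or pd-ssd; pd-standard is the default), and zone selects the GCE zone in which volumes are created.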
21.12.3.6. GlusterFS Object Definition
Example 21.15. glusterfs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081" 1
  restuser: "admin" 2
  secretName: "heketi-secret" 3
  secretNamespace: "default" 4
  gidMin: "40000" 5
  gidMax: "50000" 6
- 1
- Gluster REST service/Heketi service URL that provisions Gluster volumes on demand. The general format should be {http/https}://{IPaddress}:{Port}. This is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in OpenShift Container Platform, it will have a resolvable fully qualified domain name and Heketi service URL. For additional information and configuration, see Container-Native Storage for OpenShift Container Platform.
- 2
- Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
- 3
- Identification of a Secret instance that contains a user password to use when talking to the Gluster REST service. Optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must be of type "kubernetes.io/glusterfs".
- 4
- The namespace of the mentioned secretName. Optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must be of type "kubernetes.io/glusterfs".
- 5
- Optional. The minimum value of the GID range for the storage class.
- 6
- Optional. The maximum value of the GID range for the storage class.
When the gidMin and gidMax values are not specified, the volume is provisioned with a value between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively. If specified, a unique value (GID) in the range gidMin-gidMax is used for each dynamically provisioned volume, and the GID of the provisioned volume is set to this value. Heketi version 3 or later is required to make use of this feature. The GID is released back to the pool when the volume is deleted. The GID pool is per storage class; if two or more storage classes have GID ranges that overlap, the provisioner may dispatch duplicate GIDs.
When persistent volumes are dynamically provisioned, the Gluster plug-in automatically creates an endpoint and a headless service named gluster-dynamic-<claimname>. When the persistent volume claim is deleted, this dynamic endpoint and service are deleted automatically.
Example of a Secret
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
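As a hedged alternative to writing the YAML by hand, the same secret can typically be created from the command line; the password value here is illustrative only:

$ oc create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" \
    --from-literal=key=mypassword \
    --namespace=default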
21.12.3.7. Ceph RBD Object Definition
Example 21.16. ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789 1
  adminId: kube 2
  adminSecretName: ceph-secret 3
  adminSecretNamespace: kube-system 4
  pool: kube 5
  userId: kube 6
  userSecretName: ceph-secret-user 7
  fsType: ext4 8
- 1
- Ceph monitors, comma-delimited. It is required.
- 2
- Ceph client ID that is capable of creating images in the pool. Default is "admin".
- 3
- Secret name for adminId. It is required. The provided secret must have type "kubernetes.io/rbd".
- 4
- The namespace for adminSecret. Default is "default".
- 5
- Ceph RBD pool. Default is "rbd".
- 6
- Ceph client ID that is used to map the Ceph RBD image. Default is the same as adminId.
- 7
- The name of the Ceph secret for userId to map the Ceph RBD image. It must exist in the same namespace as PVCs. It is required.
- 8
- The file system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4.
21.12.3.8. Trident Object Definition
Example 21.17. trident.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: netapp.io/trident 1
parameters: 2
  media: "ssd"
  provisioningType: "thin"
  snapshots: "true"
Trident uses the parameters as selection criteria for the different pools of storage that are registered with it. Trident itself is configured separately.
- 1
- For more information about installing Trident with OpenShift Container Platform, see the Trident documentation.
- 2
- For more information about supported parameters, see the storage attributes section of the Trident documentation.
21.12.3.9. VMware vSphere Object Definition
Example 21.18. vsphere-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume 1
parameters:
  diskformat: thin 2
- 1
- For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation.
- 2
- diskformat: thin, zeroedthick, or eagerzeroedthick. See vSphere docs for details. Default: thin.
21.12.4. Changing the Default StorageClass
If you are using GCE or AWS, use the following process to change the default StorageClass:
List the StorageClasses; (default) denotes the default StorageClass:

$ oc get storageclass
NAME            TYPE
gp2 (default)   kubernetes.io/aws-ebs
standard        kubernetes.io/gce-pd
Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default StorageClass:

$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Make another StorageClass the default by adding or modifying the annotation storageclass.kubernetes.io/is-default-class=true:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Verify the changes:
$ oc get storageclass
NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/gce-pd
21.12.5. Additional Information and Examples
21.13. Volume Security
21.13.1. Overview
This topic provides a general guide on pod security as it relates to volume security. For information on pod-level security in general, see Managing Security Context Constraints (SCC) and the Security Context Constraint concept topic. For information on the OpenShift Container Platform persistent volume (PV) framework in general, see the Persistent Storage concept topic.
Accessing persistent storage requires coordination between the cluster and/or storage administrator and the end developer. The cluster administrator creates PVs, which abstract the underlying physical storage. The developer creates pods and, optionally, PVCs, which bind to PVs, based on matching criteria, such as capacity.
Multiple persistent volume claims (PVCs) within the same project can bind to the same PV. However, once a PVC binds to a PV, that PV cannot be bound by a claim outside of the first claim’s project. If the underlying storage needs to be accessed by multiple projects, then each project needs its own PV, which can point to the same physical storage. In this sense, a bound PV is tied to a project. For a detailed PV and PVC example, see the guide for WordPress and MySQL using NFS.
For the cluster administrator, granting pods access to PVs involves:
- knowing the group ID and/or user ID assigned to the actual storage,
- understanding SELinux considerations, and
- ensuring that these IDs are allowed in the range of legal IDs defined for the project and/or the SCC that matches the requirements of the pod.
Group IDs, the user ID, and SELinux values are defined in the SecurityContext section of a pod definition. Group IDs are global to the pod and apply to all containers defined in the pod. User IDs can also be global, or specific to each container. Four sections control access to volumes: supplemental groups, fsGroup, user IDs, and SELinux options.
21.13.2. SCCs, Defaults, and Allowed Ranges
SCCs influence whether or not a pod is given a default user ID, fsGroup ID, supplemental group ID, and SELinux label. They also influence whether or not IDs supplied in the pod definition (or in the image) will be validated against a range of allowable IDs. If validation is required and fails, then the pod will also fail.
SCCs define strategies, such as runAsUser, supplementalGroups, and fsGroup. These strategies help decide whether the pod is authorized. Strategy values set to RunAsAny are essentially stating that the pod can do what it wants regarding that strategy. Authorization is skipped for that strategy and no OpenShift Container Platform default is produced based on that strategy. Therefore, IDs and SELinux labels in the resulting container are based on container defaults instead of OpenShift Container Platform policies.
For a quick summary of RunAsAny:
- Any ID defined in the pod definition (or image) is allowed.
- Absence of an ID in the pod definition (and in the image) results in the container assigning an ID, which is root (0) for Docker.
- No SELinux labels are defined, so Docker will assign a unique label.
For these reasons, SCCs with RunAsAny for ID-related strategies should be protected so that ordinary developers do not have access to the SCC. On the other hand, SCC strategies set to MustRunAs or MustRunAsRange trigger ID validation (for ID-related strategies), and cause default values to be supplied by OpenShift Container Platform to the container when those values are not supplied directly in the pod definition or image.
Allowing access to SCCs with a RunAsAny FSGroup strategy can also prevent users from accessing their block devices. Pods need to specify an fsGroup in order to take over their block devices. Normally, this is done when the SCC FSGroup strategy is set to MustRunAs. If a user's pod is assigned an SCC with a RunAsAny FSGroup strategy, then the user may face permission denied errors until they discover that they need to specify an fsGroup themselves.
SCCs may define the range of allowed IDs (user or groups). If range checking is required (for example, using MustRunAs) and the allowable range is not defined in the SCC, then the project determines the ID range. Therefore, projects support ranges of allowable IDs. However, unlike SCCs, projects do not define strategies, such as runAsUser.
Allowable ranges are helpful not only because they define the boundaries for container IDs, but also because the minimum value in the range becomes the default value for the ID in question. For example, if the SCC ID strategy value is MustRunAs, the minimum value of an ID range is 100, and the ID is absent from the pod definition, then 100 is provided as the default for this ID.
As part of pod admission, the SCCs available to a pod are examined (roughly, in priority order followed by most restrictive) to best match the requests of the pod. Setting a SCC's strategy type to RunAsAny is less restrictive, whereas a type of MustRunAs is more restrictive. All of these strategies are evaluated. To see which SCC was assigned to a pod, use the oc get pod command:
# oc get pod <pod_name> -o yaml
...
metadata:
  annotations:
    openshift.io/scc: nfs-scc 1
  name: nfs-pod1 2
  namespace: default 3
...
- 1
- Name of the SCC that the pod used (in this case, a custom SCC).
- 2
- Name of the pod.
- 3
- Name of the project. "Namespace" is interchangeable with "project" in OpenShift Container Platform. See Projects and Users for details.
It may not be immediately obvious which SCC was matched by a pod, so the command above can be very useful in understanding the UID, supplemental groups, and SELinux relabeling in a live container.
Any SCC with a strategy set to RunAsAny allows specific values for that strategy to be defined in the pod definition (and/or image). When this applies to the user ID (runAsUser), it is prudent to restrict access to the SCC to prevent a container from being able to run as root.
Because pods often match the restricted SCC, it is worth knowing the security this entails. The restricted SCC has the following characteristics:
- User IDs are constrained due to the runAsUser strategy being set to MustRunAsRange. This forces user ID validation.
- Because a range of allowable user IDs is not defined in the SCC (see oc export scc restricted for more details), the project's openshift.io/sa.scc.uid-range range will be used for range checking and for a default ID, if needed.
- A default user ID is produced when a user ID is not specified in the pod definition and the matching SCC's runAsUser is set to MustRunAsRange.
- An SELinux label is required (seLinuxContext set to MustRunAs), which uses the project's default MCS label.
- fsGroup IDs are constrained to a single value due to the FSGroup strategy being set to MustRunAs, which dictates that the value to use is the minimum value of the first range specified.
- Because a range of allowable fsGroup IDs is not defined in the SCC, the minimum value of the project's openshift.io/sa.scc.supplemental-groups range (or the same range used for user IDs) will be used for validation and for a default ID, if needed.
- A default fsGroup ID is produced when an fsGroup ID is not specified in the pod and the matching SCC's FSGroup is set to MustRunAs.
- Arbitrary supplemental group IDs are allowed because no range checking is required. This is a result of the supplementalGroups strategy being set to RunAsAny.
- Default supplemental groups are not produced for the running pod due to RunAsAny for the two group strategies above. Therefore, if no groups are defined in the pod definition (or in the image), the container(s) will have no supplemental groups predefined.
The following shows the default project and a custom SCC (my-custom-scc), which summarizes the interactions of the SCC and the project:
$ oc get project default -o yaml 1
...
metadata:
  annotations: 2
    openshift.io/sa.scc.mcs: s0:c1,c0 3
    openshift.io/sa.scc.supplemental-groups: 1000000000/10000 4
    openshift.io/sa.scc.uid-range: 1000000000/10000 5

$ oc get scc my-custom-scc -o yaml
...
fsGroup:
  type: MustRunAs 6
  ranges:
  - min: 5000
    max: 6000
runAsUser:
  type: MustRunAsRange 7
  uidRangeMin: 1000100000
  uidRangeMax: 1000100999
seLinuxContext: 8
  type: MustRunAs
  SELinuxOptions: 9
    user: <selinux-user-name>
    role: ...
    type: ...
    level: ...
supplementalGroups:
  type: MustRunAs 10
  ranges:
  - min: 5000
    max: 6000
- 1
- default is the name of the project.
- 2
- Default values are only produced when the corresponding SCC strategy is not RunAsAny.
- 3
- SELinux default when not defined in the pod definition or in the SCC.
- 4
- Range of allowable group IDs. ID validation only occurs when the SCC strategy is not RunAsAny. There can be more than one range specified, separated by commas. See below for supported formats.
- 5
- Same as callout 4, but for user IDs. Also, only a single range of user IDs is supported.
- 6 10
- MustRunAs enforces group ID range checking and provides the container’s groups default. Based on this SCC definition, the default is 5000 (the minimum ID value). If the range was omitted from the SCC, then the default would be 1000000000 (derived from the project). The other supported type, RunAsAny, does not perform range checking, thus allowing any group ID, and produces no default groups.
- 7
- MustRunAsRange enforces user ID range checking and provides a UID default. Based on this SCC, the default UID is 1000100000 (the minimum value). If the minimum and maximum range were omitted from the SCC, the default user ID would be 1000000000 (derived from the project). MustRunAsNonRoot and RunAsAny are the other supported types. The range of allowed IDs can be defined to include any user IDs required for the target storage.
- 8
- When set to MustRunAs, the container is created with the SCC’s SELinux options, or the MCS default defined in the project. A type of RunAsAny indicates that SELinux context is not required, and, if not defined in the pod, is not set in the container.
- 9
- The SELinux user name, role name, type, and labels can be defined here.
Two formats are supported for allowed ranges:
- M/N, where M is the starting ID and N is the count, so the range becomes M through (and including) M+N-1.
- M-N, where M is again the starting ID and N is the ending ID.

The default group ID is the starting ID in the first range, which is 1000000000 in this project. If the SCC did not define a minimum group ID, then the project's default ID is applied.
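For example, the project annotation openshift.io/sa.scc.uid-range: 1000000000/10000 shown above uses the M/N form, so it allows IDs 1000000000 through 1000009999, with 1000000000 as the default.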
21.13.3. Supplemental Groups
Read SCCs, Defaults, and Allowed Ranges before working with supplemental groups.
Supplemental groups are regular Linux groups. When a process runs in Linux, it has a UID, a GID, and one or more supplemental groups. These attributes can be set for a container's main process. The supplementalGroups IDs are typically used for controlling access to shared storage, such as NFS and GlusterFS, whereas fsGroup is used for controlling access to block storage, such as Ceph RBD and iSCSI.
The OpenShift Container Platform shared storage plug-ins mount volumes such that the POSIX permissions on the mount match the permissions on the target storage. For example, if the target storage’s owner ID is 1234 and its group ID is 5678, then the mount on the host node and in the container will have those same IDs. Therefore, the container’s main process must match one or both of those IDs in order to access the volume.
For example, consider the following NFS export.
On an OpenShift Container Platform node:
showmount requires access to the ports used by rpcbind and rpc.mount on the NFS server.

# showmount -e <nfs-server-ip-or-hostname>
Export list for f21-nfs.vm:
/opt/nfs *
On the NFS server:
# cat /etc/exports
/opt/nfs *(rw,sync,root_squash)
...

# ls -lZ /opt/nfs -d
drwx------. 1000100001 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs
The /opt/nfs/ export is accessible by UID 1000100001 and the group 5555. In general, containers should not run as root. So, in this NFS example, containers that are not run as UID 1000100001 and are not members of the group 5555 will not have access to the NFS export.
Often, the SCC matching the pod does not allow a specific user ID to be specified, thus using supplemental groups is a more flexible way to grant storage access to a pod. For example, to grant NFS access to the export above, the group 5555 can be defined in the pod definition:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: ...
    volumeMounts:
    - name: nfs 1
      mountPath: /usr/share/... 2
  securityContext: 3
    supplementalGroups: [5555] 4
  volumes:
  - name: nfs 5
    nfs:
      server: <nfs_server_ip_or_host>
      path: /opt/nfs 6
- 1
- Name of the volume mount. Must match the name in the volumes section.
- 2
- NFS export path as seen in the container.
- 3
- Pod global security context. Applies to all containers inside the pod. Each container can also define its own securityContext; however, group IDs are global to the pod and cannot be defined for individual containers.
- 4
- supplementalGroups, which is an array of IDs, is set to 5555. This grants group access to the export.
- 5
- Name of the volume. Must match the name in the volumeMounts section.
- 6
- Actual NFS export path on the NFS server.
All containers in the above pod (assuming the matching SCC or project allows the group 5555) will be members of the group 5555 and have access to the volume, regardless of the container's user ID. However, the assumption above is critical. Sometimes, the SCC does not define a range of allowable group IDs but instead requires group ID validation (a result of supplementalGroups set to MustRunAs). Note that this is not the case for the restricted SCC. The project will not likely allow a group ID of 5555, unless the project has been customized to access this NFS export. So, in this scenario, the above pod will fail because its group ID of 5555 is not within the SCC's or the project's range of allowed group IDs.
Supplemental Groups and Custom SCCs
To remedy the situation in the previous example, a custom SCC can be created such that:
- a minimum and max group ID are defined,
- ID range checking is enforced, and
- the group ID of 5555 is allowed.
It is often better to create a new SCC rather than modifying a predefined SCC, or changing the range of allowed IDs in the predefined projects.
The easiest way to create a new SCC is to export an existing SCC and customize the YAML file to meet the requirements of the new SCC. For example:
Use the restricted SCC as a template for the new SCC:
$ oc export scc restricted > new-scc.yaml
- Edit the new-scc.yaml file to your desired specifications.
Create the new SCC:
$ oc create -f new-scc.yaml
The oc edit scc command can be used to modify an instantiated SCC.
Here is a fragment of a new SCC named nfs-scc:
$ oc export scc nfs-scc

allowHostDirVolumePlugin: false 1
...
kind: SecurityContextConstraints
metadata:
  ...
  name: nfs-scc 2
priority: 9 3
...
supplementalGroups:
  type: MustRunAs 4
  ranges:
  - min: 5000 5
    max: 6000
...
- 1
- The allow booleans are the same as for the restricted SCC.
- 2
- Name of the new SCC.
- 3
- Numerically larger numbers have greater priority. Nil or omitted is the lowest priority. Higher priority SCCs sort before lower priority SCCs and thus have a better chance of matching a new pod.
- 4
- supplementalGroups is a strategy and it is set to MustRunAs, which means group ID checking is required.
- 5
- Multiple ranges are supported. The allowed group ID range here is 5000 through 6000, with the default supplemental group being 5000.
When the same pod shown earlier runs against this new SCC (assuming, of course, the pod matches the new SCC), it will start because the group 5555, supplied in the pod definition, is now allowed by the custom SCC.
21.13.4. fsGroup
Read SCCs, Defaults, and Allowed Ranges before working with supplemental groups.
It is generally preferable to use group IDs (supplemental or fsGroup) to gain access to persistent storage versus using user IDs.

fsGroup defines a pod's "file system group" ID, which is added to the container's supplemental groups. The supplementalGroups ID applies to shared storage, whereas the fsGroup ID is used for block storage.
Block storage, such as Ceph RBD, iSCSI, and various cloud storage, is typically dedicated to a single pod which has requested the block storage volume, either directly or using a PVC. Unlike shared storage, block storage is taken over by a pod, meaning that user and group IDs supplied in the pod definition (or image) are applied to the actual, physical block device. Typically, block storage is not shared.
An fsGroup definition is shown in the following pod definition fragment:
kind: Pod
...
spec:
  containers:
  - name: ...
  securityContext: 1
    fsGroup: 5555 2
...
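In this fragment, callout 1 marks the pod-level securityContext (fsGroup is a pod-scoped setting rather than a per-container one), and callout 2 marks the fsGroup ID, 5555 in this example, which is added to the containers' supplemental groups.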
As with supplementalGroups, all containers in the above pod (assuming the matching SCC or project allows the group 5555) will be members of the group 5555, and will have access to the block volume, regardless of the container's user ID. If the pod matches the restricted SCC, whose fsGroup strategy is MustRunAs, then the pod will fail to run. However, if the SCC has its fsGroup strategy set to RunAsAny, then any fsGroup ID (including 5555) will be accepted. Note that if the SCC has its fsGroup strategy set to RunAsAny and no fsGroup ID is specified, the "taking over" of the block storage does not occur and permissions may be denied to the pod.
fsGroups and Custom SCCs
To remedy the situation in the previous example, a custom SCC can be created such that:
- a minimum and maximum group ID are defined,
- ID range checking is enforced, and
- the group ID of 5555 is allowed.
It is better to create new SCCs versus modifying a predefined SCC, or changing the range of allowed IDs in the predefined projects.
Consider the following fragment of a new SCC definition:
# oc export scc new-scc
...
kind: SecurityContextConstraints
...
fsGroup:
  type: MustRunAs 1
  ranges: 2
  - max: 6000
    min: 5000 3
...
- 1
- MustRunAs triggers group ID range checking, whereas RunAsAny does not require range checking.
- 2
- The range of allowed group IDs is 5000 through, and including, 6000. Multiple ranges are supported but not used here. The default fsGroup is 5000 (the minimum value).
- 3
- The minimum value (or the entire range) can be omitted from the SCC, in which case range checking and generating a default value defer to the project's openshift.io/sa.scc.supplemental-groups range. fsGroup and supplementalGroups use the same group field in the project; there is not a separate range for fsGroup.
When the pod shown above runs against this new SCC (assuming, of course, the pod matches the new SCC), it will start because the group 5555, supplied in the pod definition, is allowed by the custom SCC. Additionally, the pod will "take over" the block device, so when the block storage is viewed by a process outside of the pod, it will actually have 5555 as its group ID.
Volumes that support block ownership include:
- AWS Elastic Block Store
- OpenStack Cinder
- Ceph RBD
- GCE Persistent Disk
- iSCSI
- emptyDir
- gitRepo
This list is potentially incomplete.
21.13.5. User IDs
Read SCCs, Defaults, and Allowed Ranges before working with supplemental groups.
It is generally preferable to use group IDs (supplemental or fsGroup) to gain access to persistent storage versus using user IDs.
User IDs can be defined in the container image or in the pod definition. In the pod definition, a single user ID can be defined globally to all containers, or specific to individual containers (or both). A user ID is supplied as shown in the pod definition fragment below:
spec: containers: - name: ... securityContext: runAsUser: 1000100001
ID 1000100001 in the above is container-specific and matches the owner ID on the export. If the NFS export's owner ID was 54321, then that number would be used in the pod definition. Specifying securityContext outside of the container definition makes the ID global to all containers in the pod.
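As a minimal sketch of the pod-global form described above, the same runAsUser can be moved up one level so that it applies to every container in the pod:

spec:
  securityContext:        # pod-level security context
    runAsUser: 1000100001
  containers:
  - name: ...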
Similar to group IDs, user IDs may be validated according to policies set in the SCC and/or project. If the SCC's runAsUser strategy is set to RunAsAny, then any user ID defined in the pod definition or in the image is allowed.
This means even a UID of 0 (root) is allowed.
If, instead, the runAsUser strategy is set to MustRunAsRange, then a supplied user ID is validated against a range of allowed IDs. If the pod supplies no user ID, then the default ID is set to the minimum value of the range of allowable user IDs.
Returning to the earlier NFS example, the container needs its UID set to 1000100001, which is shown in the pod fragment above. Assuming the default project and the restricted SCC, the pod’s requested user ID of 1000100001 will not be allowed, and therefore the pod will fail. The pod fails because:
- it requests 1000100001 as its user ID,
- all available SCCs use MustRunAsRange for their runAsUser strategy, so UID range checking is required, and
- 1000100001 is not included in the SCC or in the project's user ID range.
To remedy this situation, a new SCC can be created with the appropriate user ID range. A new project could also be created with the appropriate user ID range defined. There are also other, less-preferred options:
- The restricted SCC could be modified to include 1000100001 within its minimum and maximum user ID range. This is not recommended as you should avoid modifying the predefined SCCs if possible.
- The restricted SCC could be modified to use RunAsAny for the runAsUser value, thus eliminating ID range checking. This is strongly discouraged, as containers could run as root.
- The default project's UID range could be changed to allow a user ID of 1000100001. This is not generally advisable because only a single range of user IDs can be specified, and thus other pods may not run if the range is altered.
User IDs and Custom SCCs
It is good practice to avoid modifying the predefined SCCs if possible. The preferred approach is to create a custom SCC that better fits an organization’s security needs, or create a new project that supports the desired user IDs.
To remedy the situation in the previous example, a custom SCC can be created such that:
- a minimum and maximum user ID is defined,
- UID range checking is still enforced, and
- the UID of 1000100001 is allowed.
For example:
$ oc export scc nfs-scc

allowHostDirVolumePlugin: false 1
...
kind: SecurityContextConstraints
metadata:
  ...
  name: nfs-scc 2
priority: 9 3
requiredDropCapabilities: null
runAsUser:
  type: MustRunAsRange 4
  uidRangeMax: 1000100001 5
  uidRangeMin: 1000100001
...
- 1
- The allowXX booleans are the same as for the restricted SCC.
- 2
- The name of this new SCC is nfs-scc.
- 3
- Numerically larger numbers have greater priority. Nil or omitted is the lowest priority. Higher priority SCCs sort before lower priority SCCs, and thus have a better chance of matching a new pod.
- 4
- The runAsUser strategy is set to MustRunAsRange, which means UID range checking is enforced.
- 5
- The UID range is 1000100001 through 1000100001 (a range of one value).
Now, with runAsUser: 1000100001 shown in the previous pod definition fragment, the pod matches the new nfs-scc and is able to run with a UID of 1000100001.
21.13.6. SELinux Options
All predefined SCCs, except for the privileged SCC, set the seLinuxContext to MustRunAs. So the SCCs most likely to match a pod's requirements will force the pod to use an SELinux policy. The SELinux policy used by the pod can be defined in the pod itself, in the image, in the SCC, or in the project (which provides the default).
SELinux labels can be defined in a pod's securityContext.seLinuxOptions section, which supports user, role, type, and level:
Level and MCS label are used interchangeably in this topic.
...
securityContext: 1
  seLinuxOptions:
    level: "s0:c123,c456" 2
...
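Callout 1 marks the securityContext in which the SELinux options are set, and callout 2 marks the MCS label (the level); s0:c123,c456 is only an illustrative value from this fragment.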
Here are fragments from an SCC and from the default project:
$ oc export scc scc-name
...
seLinuxContext:
  type: MustRunAs 1

# oc export project default
...
metadata:
  annotations:
    openshift.io/sa.scc.mcs: s0:c1,c0 2
...
All predefined SCCs, except for the privileged SCC, set the seLinuxContext to MustRunAs. This forces pods to use MCS labels, which can be defined in the pod definition or the image, or provided as a default.
The SCC determines whether or not to require an SELinux label and can provide a default label. If the seLinuxContext strategy is set to MustRunAs and the pod (or image) does not define a label, OpenShift Container Platform defaults to a label chosen from the SCC itself or from the project.

If seLinuxContext is set to RunAsAny, then no default labels are provided, and the container determines the final label. In the case of Docker, the container will use a unique MCS label, which will not likely match the labeling on existing storage mounts. Volumes which support SELinux management will be relabeled so that they are accessible by the specified label and, depending on how exclusionary the label is, only that label.
This means two things for unprivileged containers:
- The volume is given a type that is accessible by unprivileged containers. This type is usually svirt_sandbox_file_t.
- If a level is specified, the volume is labeled with the given MCS label.
For a volume to be accessible by a pod, the pod must have both categories of the volume. So a pod with s0:c1,c2 will be able to access a volume with s0:c1,c2. A volume with s0 will be accessible by all pods.
If pods fail authorization, or if the storage mount is failing due to permissions errors, then there is a possibility that SELinux enforcement is interfering. One way to check for this is to run:
# ausearch -m avc --start recent
This examines the log file for AVC (Access Vector Cache) errors.
21.14. Selector-Label Volume Binding
21.14.1. Overview
This guide provides the steps necessary to enable binding of persistent volume claims (PVCs) to persistent volumes (PVs) via selector and label attributes. By implementing selectors and labels, regular users are able to target provisioned storage by identifiers defined by a cluster administrator.
21.14.2. Motivation
In cases of statically provisioned storage, developers seeking persistent storage are required to know a handful of identifying attributes of a PV in order to deploy and bind a PVC. This creates several problematic situations. Regular users might have to contact a cluster administrator to either deploy the PVC or provide the PV values. PV attributes alone do not convey the intended use of the storage volumes, nor do they provide methods by which volumes can be grouped.
Selector and label attributes can be used to abstract away PV details from the user while providing cluster administrators a way of identifying volumes by a descriptive and customizable tag. Through the selector-label method of binding, users are only required to know which labels are defined by the administrator.
The selector-label feature is currently only available for statically provisioned storage; it is not implemented for dynamically provisioned storage.
21.14.3. Deployment
This section reviews how to define and deploy PVCs.
21.14.3.1. Prerequisites
- A running OpenShift Container Platform 3.3+ cluster
- A volume provided by a supported storage provider
- A user with a cluster-admin role binding
21.14.3.2. Define the Persistent Volume and Claim
As the cluster-admin user, define the PV. For this example, we will be using a GlusterFS volume. See the appropriate storage provider for your provider's configuration.
Example 21.19. Persistent Volume with Labels
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels: 1
    volume-type: ssd
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
- 1
- A PVC whose selectors match all of a PV’s labels will be bound, assuming a PV is available.
Define the PVC:
Example 21.20. Persistent Volume Claim with Selectors
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector: 1
    matchLabels: 2
      volume-type: ssd
      aws-availability-zone: us-east-1
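Callout 1 marks the selector section and callout 2 its matchLabels; every key/value pair listed here must match a label on the PV, as described above, for the claim to bind.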
21.14.3.3. Deploy the Persistent Volume and Claim
As the cluster-admin user, create the persistent volume:
Example 21.21. Create the Persistent Volume
# oc create -f gluster-pv.yaml
persistentVolume "gluster-volume" created

# oc get pv
NAME             LABELS   CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-volume   map[]    2147483648   RWX           Available                       2s
Once the PV is created, any user whose selectors match all its labels can create their PVC.
Example 21.22. Create the Persistent Volume Claim
# oc create -f gluster-pvc.yaml
persistentVolumeClaim "gluster-claim" created

# oc get pvc
NAME            LABELS    STATUS    VOLUME
gluster-claim             Bound     gluster-volume
21.15. Enabling Controller-managed Attachment and Detachment
21.15.1. Overview
As of OpenShift Container Platform 3.4, administrators can enable the controller running on the cluster’s master to manage volume attach and detach operations on behalf of a set of nodes, as opposed to letting them manage their own volume attach and detach operations.
Enabling controller-managed attachment and detachment has the following benefits:
- If a node is lost, volumes that were attached to it can be detached by the controller and reattached elsewhere.
- Credentials for attaching and detaching do not need to be made present on every node, improving security.
As of OpenShift Container Platform 3.6, controller-managed attachment and detachment is the default setting.
21.15.2. Determining What Is Managing Attachment and Detachment
If a node has set the annotation volumes.kubernetes.io/controller-managed-attach-detach on itself, then its attach and detach operations are being managed by the controller. The controller will automatically inspect all nodes for this annotation and act according to whether it is present or not. Therefore, you may inspect the node for this annotation to determine if it has enabled controller-managed attach and detach.
To further ensure that the node is opting for controller-managed attachment and detachment, its logs can be searched for the following line:
Setting node annotation to enable volume controller attach/detach
If the above line is not found, the logs should instead contain:
Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
To check from the controller's end that it is managing a particular node's attach and detach operations, the logging level must first be set to at least 4. Then, the following line should be found:
processVolumesInUse for node <node_hostname>
For information on how to view logs and configure logging levels, see Configuring Logging Levels.
21.15.3. Configuring Nodes to Enable Controller-managed Attachment and Detachment
Enabling controller-managed attachment and detachment is done by configuring individual nodes to opt in and disable their own node-level attachment and detachment management. See Node Configuration Files for information on which node configuration file to edit, and add the following:
kubeletArguments:
  enable-controller-attach-detach:
  - "true"
Once a node is configured, it must be restarted for the setting to take effect.
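For example, on an RPM-based OpenShift Container Platform 3.x host this typically means restarting the node service; the exact unit name depends on your installation, so the following is only an assumption:

$ systemctl restart atomic-openshift-node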