Storage
Configuring and managing storage in OpenShift Container Platform
Chapter 1. OpenShift Container Platform storage overview
OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster.
1.1. Glossary of common terms for OpenShift Container Platform storage
This glossary defines common terms that are used in the storage content.
- Access modes
Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes:
- ReadWriteOnce (RWO)
- ReadOnlyMany (ROX)
- ReadWriteMany (RWX)
- ReadWriteOncePod (RWOP)
- Cinder
- The Block Storage service for Red Hat OpenStack Platform (RHOSP), which manages the administration, security, and scheduling of all volumes.
- Config map
- A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap. Applications running in a pod can use this data.
- Container Storage Interface (CSI)
- An API specification for the management of container storage across different container orchestration (CO) systems.
- Dynamic Provisioning
- The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage.
- Ephemeral storage
- Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods.
- Fibre Channel
- A networking technology that is used to transfer data among data centers, computer servers, switches and storage.
- FlexVolume
- FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes.
- fsGroup
- The fsGroup defines the file system group ID of a pod.
- iSCSI
- Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
- hostPath
- A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod.
- KMS key
- The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. You can use the KMS key to encrypt, decrypt, and re-encrypt data.
- Local volumes
- A local volume represents a mounted local storage device such as a disk, partition or directory.
- NFS
- A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.
- OpenShift Data Foundation
- Provider-agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds.
- Persistent storage
- Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure.
- Persistent volumes (PV)
- OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure.
- Persistent volume claims (PVCs)
- You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment.
- Pod
- One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed.
- Reclaim policy
- A policy that tells the cluster what to do with the volume after it is released. A volume’s reclaim policy can be Retain, Recycle, or Delete.
- Role-based access control (RBAC)
- Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.
- Stateless applications
- A stateless application is an application program that does not save client data generated in one session for use in the next session with that client.
- Stateful applications
- A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use persistent disk storage. You can use the StatefulSet object in OpenShift Container Platform to manage the deployment and scaling of a set of pods, which provides guarantees about the ordering and uniqueness of these pods.
- Static provisioning
- A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption.
- Storage
- OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster.
- Storage class
- A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators.
- VMware vSphere’s Virtual Machine Disk (VMDK) volumes
- Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines.
1.2. Storage types
OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage.
1.2.1. Ephemeral storage
Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about the overview, types, and management of ephemeral storage, see Understanding ephemeral storage.
1.2.2. Persistent storage
Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For more information about the overview, configuration, and lifecycle of persistent storage, see Understanding persistent storage.
1.3. Container Storage Interface (CSI)
CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI).
1.4. Dynamic Provisioning
Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning.
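For example, dynamic provisioning is typically wired together as a storage class plus a claim that references it. The following is a minimal sketch; the class name, claim name, and provisioner are illustrative and must match what is available in your cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-sc                  # illustrative name
provisioner: ebs.csi.aws.com     # assumption: an AWS cluster; use the CSI provisioner available to you
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-sc      # requesting this class triggers on-demand provisioning
  resources:
    requests:
      storage: 10Gi

When a pod consumes the claim, the provisioner creates a matching PV and binds it to the claim automatically.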
Chapter 2. Understanding ephemeral storage
2.1. Overview
In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods.
Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following:
- Pods cannot detect how much local storage is available to them.
- Pods cannot request guaranteed local storage.
- Local storage is a best-effort resource.
- Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage is reclaimed.
Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and OpenShift Container Platform. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows OpenShift Container Platform to schedule pods where appropriate, and to protect the node against excessive use of local storage.
While the ephemeral storage framework allows administrators and developers to better manage local storage, I/O throughput and latency are not directly affected.
2.2. Types of ephemeral storage
Ephemeral local storage is always made available in the primary partition. There are two basic ways of creating the primary partition: root and runtime.
Root
This partition holds the kubelet root directory, /var/lib/kubelet/ by default, and the /var/log/ directory. Pods can consume this partition through EmptyDir volumes, container logs, image layers, and container-writable layers.
Runtime
This is an optional partition that runtimes can use for overlay file systems. OpenShift Container Platform attempts to identify and provide shared access along with isolation to this partition. Container image layers and writable layers are stored here. If the runtime partition exists, the root partition does not hold any image layer or other writable storage.
2.3. Ephemeral storage management
Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state. Developers can also set requests and limits on this compute resource at the pod and container level.
You can manage local ephemeral storage by specifying requests and limits. Each container in a pod can specify the following:
- spec.containers[].resources.limits.ephemeral-storage
- spec.containers[].resources.requests.ephemeral-storage
Limits and requests for ephemeral storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following quantities all represent approximately the same value: 128974848, 129e6, 129M, and 123Mi. The case of the suffixes is significant. If you specify 400m of ephemeral storage, this requests 0.4 bytes, rather than 400 mebibytes (400Mi) or 400 megabytes (400M), which was probably what was intended.
The following example shows a pod with two containers. Each container requests 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral storage. Therefore, the pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of local ephemeral storage.
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: app
image: images.my-company.example/app:v4
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
volumes:
- name: ephemeral
emptyDir: {}
This setting in the pod spec affects how the scheduler makes decisions about scheduling pods, and also how the kubelet evicts pods. First of all, the scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node. In this case, the pod can be assigned to a node only if the node's available ephemeral storage (allocatable resource) is more than 4GiB.
Secondly, at the container level, because the first container sets a resource limit, the kubelet eviction manager measures the disk usage of this container and evicts the pod if the storage usage of this container exceeds its limit (4GiB). At the pod level, the kubelet works out an overall pod storage limit by adding up the limits of all the containers in that pod. In this case, the total storage usage at the pod level is the sum of the disk usage from all containers plus the pod's emptyDir volumes. If this total usage exceeds the overall pod storage limit, the kubelet marks the pod for eviction.
For information about defining quotas for projects, see Quota setting per project.
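As a sketch of such a quota (the name, namespace, and values are illustrative), a cluster administrator might cap ephemeral storage across a project as follows:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota      # illustrative name
  namespace: my-project              # illustrative project
spec:
  hard:
    requests.ephemeral-storage: 20Gi   # sum of ephemeral-storage requests across all pods
    limits.ephemeral-storage: 40Gi     # sum of ephemeral-storage limits across all pods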
2.4. Monitoring ephemeral storage
You can use /bin/df as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers. The available space for only /var/lib/kubelet is shown when you use the df command if /var/lib/containers is placed on a separate disk by the cluster administrator.
To show the human-readable values of used and available space in /var/lib, enter the following command:
$ df -h /var/lib
The output shows the ephemeral storage usage in /var/lib:
Example output
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 69G 32G 34G 49% /
Chapter 3. Understanding persistent storage
3.1. Persistent storage overview
Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.
PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV cannot then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project.
PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned by using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource.
PVs are volume plugins, like Volumes, but they have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
High availability of storage in the infrastructure is left to the underlying storage provider.
PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources.
3.2. Lifecycle of a volume and claim
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs has the following lifecycle.
3.2.1. Provision storage
In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV.
Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use.
3.2.2. Bind claims
When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one.
The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria.
Claims remain unbound indefinitely if a matching volume does not exist or cannot be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
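For example, the following claim sketch would remain Pending on such a cluster until a 100Gi volume is provisioned or added (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: big-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi         # none of the existing 50Gi PVs can satisfy this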
3.2.3. Use pods and claimed PVs
Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod.
Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block.
If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state?.
3.2.4. Storage Object in Use Protection
The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss.
Storage Object in Use Protection is enabled by default.
A PVC is in active use by a pod when a Pod object exists that uses the PVC.
If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
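One way to observe this protection is through the finalizers that the feature places on the objects; for example (the object names are placeholders):

$ oc get pvc <pvc-name> -o jsonpath='{.metadata.finalizers}'
["kubernetes.io/pvc-protection"]

$ oc get pv <pv-name> -o jsonpath='{.metadata.finalizers}'
["kubernetes.io/pv-protection"]

Deletion completes only after the finalizer is cleared, that is, after the PVC is no longer used by any pod, or the PV is no longer bound to a PVC.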
3.2.5. Release a persistent volume
When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The previous claimant’s data remains on the volume and must be handled according to policy.
3.2.6. Reclaim policy for persistent volumes
The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume’s reclaim policy can be Retain, Recycle, or Delete.
- Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it.
- Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. The Recycle reclaim policy is deprecated; dynamic provisioning is recommended instead.
- Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as AWS EBS or VMware vSphere.
Dynamically provisioned volumes are always deleted.
3.2.7. Reclaiming a persistent volume manually
When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the previous claimant remains on the volume.
Procedure
To manually reclaim the PV as a cluster administrator:
Delete the PV.
$ oc delete pv <pv-name>

The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted.
- Clean up the data on the associated storage asset.
- Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition.
The reclaimed PV is now available for use by another PVC.
3.2.8. Changing the reclaim policy of a persistent volume
To change the reclaim policy of a persistent volume:
List the persistent volumes in your cluster:
$ oc get pv

Example output

NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Delete          Bound    default/claim1   manual                  10s
pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Delete          Bound    default/claim2   manual                  6s
pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Delete          Bound    default/claim3   manual                  3s

Choose one of your persistent volumes and change its reclaim policy:

$ oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Verify that your chosen persistent volume has the right policy:

$ oc get pv

Example output

NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Delete          Bound    default/claim1   manual                  10s
pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Delete          Bound    default/claim2   manual                  6s
pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94   4Gi        RWO           Retain          Bound    default/claim3   manual                  3s

In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3.
3.3. Persistent volumes
Each PV contains a spec and status, which is the specification and status of the volume, for example:
PersistentVolume object definition example
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
...
status:
...
3.3.1. Types of PVs
OpenShift Container Platform supports the following persistent volume plugins:
- AliCloud Disk
- AWS Elastic Block Store (EBS)
- AWS Elastic File Store (EFS)
- Azure Disk
- Azure File
- Cinder
- Fibre Channel
- GCP Persistent Disk
- GCP Filestore
- IBM VPC Block
- HostPath
- iSCSI
- Local volume
- NFS
- OpenStack Manila
- Red Hat OpenShift Data Foundation
- VMware vSphere
3.3.2. Capacity
Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV.
Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on.
3.3.3. Access modes
A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.
Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.
Direct matches are always attempted first. The volume’s modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.
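To illustrate the matching rules, the following sketch pairs a manually provisioned NFS PV that offers several modes with a claim that requests only RWO; the claim can bind to this PV because the volume's modes contain the requested mode. Names, the server address, and the export path are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:               # the PV offers more modes than the claim asks for
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  nfs:
    path: /exports/data      # illustrative export path
    server: 192.0.2.10       # illustrative server address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwo-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # a request; the bound volume may offer more
  resources:
    requests:
      storage: 5Gi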
The following table lists the access modes:
| Access Mode | CLI abbreviation | Description |
|---|---|---|
| ReadWriteOnce | RWO | The volume can be mounted as read-write by a single node. |
| ReadOnlyMany | ROX | The volume can be mounted as read-only by many nodes. |
| ReadWriteMany | RWX | The volume can be mounted as read-write by many nodes. |
Volume access modes are descriptors of volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume's ROX capability. Errors in the provider show up at runtime as mount errors.
iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure the pods that use these volumes are deleted.
| Volume plugin | ReadWriteOnce [1] | ReadOnlyMany | ReadWriteMany |
|---|---|---|---|
| AliCloud Disk | ✅ | - | - |
| AWS EBS [2] | ✅ | - | - |
| AWS EFS | ✅ | ✅ | ✅ |
| Azure File | ✅ | ✅ | ✅ |
| Azure Disk | ✅ | - | - |
| Cinder | ✅ | - | - |
| Fibre Channel | ✅ | ✅ | ✅ [3] |
| GCP Persistent Disk | ✅ | - | - |
| GCP Filestore | ✅ | ✅ | ✅ |
| HostPath | ✅ | - | - |
| IBM VPC Disk | ✅ | - | - |
| iSCSI | ✅ | ✅ | ✅ [3] |
| Local volume | ✅ | - | - |
| LVM Storage | ✅ | - | - |
| NFS | ✅ | ✅ | ✅ |
| OpenStack Manila | - | - | ✅ |
| Red Hat OpenShift Data Foundation | ✅ | - | ✅ |
| VMware vSphere | ✅ | - | ✅ [4] |
1. ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached.
2. Use a recreate deployment strategy for pods that rely on Amazon EBS.
3. Only raw block volumes support the ReadWriteMany (RWX) access mode for Fibre Channel and iSCSI. For more information, see "Block volume support".
4. If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information, see "Using Container Storage Interface" → "VMware vSphere CSI Driver Operator".
3.3.4. Phase
Volumes can be found in one of the following phases:
| Phase | Description |
|---|---|
| Available | A free resource not yet bound to a claim. |
| Bound | The volume is bound to a claim. |
| Released | The claim was deleted, but the resource is not yet reclaimed by the cluster. |
| Failed | The volume has failed its automatic reclamation. |
You can view the name of the PVC bound to the PV by running:
$ oc get pv <pv_claim>
3.3.4.1. Mount options
You can specify mount options while mounting a PV by using the mountOptions attribute.
For example:
Mount options example
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
mountOptions:
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
persistentVolumeReclaimPolicy: Retain
claimRef:
name: claim1
namespace: default
- 1
- Specified mount options are used while mounting the PV to the disk.
The following PV types support mount options:
- AWS Elastic Block Store (EBS)
- Azure Disk
- Azure File
- Cinder
- GCE Persistent Disk
- iSCSI
- Local volume
- NFS
- Red Hat OpenShift Data Foundation (Ceph RBD only)
- VMware vSphere
Fibre Channel and HostPath PVs do not support mount options.
3.4. Persistent volume claims
Each PersistentVolumeClaim contains a spec and status, which is the specification and status of the claim, for example:
PersistentVolumeClaim object definition example
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: gold
status:
...
3.4.1. Storage classes
Claims can optionally request a specific storage class by specifying the storage class’s name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes, or create a PV on demand that matches the specifications in the PVC.
The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.
The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class.
If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default.
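For example, a cluster administrator can mark a class as the default with the standard annotation; the class name is a placeholder:

$ oc patch storageclass <storage-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Setting the same annotation to "false" on any other class removes a competing default.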
3.4.2. Access modes
Claims use the same conventions as volumes when requesting storage with specific access modes.
3.4.3. Resources
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims.
3.4.4. Claims as volumes
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod that uses the claim. The cluster finds the claim in the pod’s namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example:
Mount volume to the host and into the pod example
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: dockerfile/nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
- 1
- Path to mount the volume inside the pod.
- 2
- Name of the volume to mount. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- 3
- Name of the PVC, that exists in the same namespace, to use.
3.5. Block volume support
OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification.
Pods using raw block volumes must be configured to allow privileged containers.
The following table displays which volume plugins support block volumes.
| Volume Plugin | Manually provisioned | Dynamically provisioned | Fully supported |
|---|---|---|---|
| AliCloud Disk | ✅ | ✅ | ✅ |
| AWS EBS | ✅ | ✅ | ✅ |
| AWS EFS |  |  |  |
| Azure Disk | ✅ | ✅ | ✅ |
| Azure File |  |  |  |
| Cinder | ✅ | ✅ | ✅ |
| Fibre Channel | ✅ | ✅ |  |
| GCP | ✅ | ✅ | ✅ |
| HostPath |  |  |  |
| IBM VPC Disk | ✅ | ✅ | ✅ |
| iSCSI | ✅ | ✅ |  |
| Local volume | ✅ | ✅ |  |
| LVM Storage | ✅ | ✅ | ✅ |
| NFS |  |  |  |
| Red Hat OpenShift Data Foundation | ✅ | ✅ | ✅ |
| VMware vSphere | ✅ | ✅ | ✅ |
Using any of the block volumes that can be provisioned manually, but are not provided as fully supported, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
3.5.1. Block volume examples
PV example
apiVersion: v1
kind: PersistentVolume
metadata:
name: block-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
volumeMode: Block
persistentVolumeReclaimPolicy: Retain
fc:
targetWWNs: ["50060e801049cfd1"]
lun: 0
readOnly: false
- 1
- volumeMode must be set to Block to indicate that this PV is a raw block volume.
PVC example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: block-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 10Gi
- 1
- volumeMode must be set to Block to indicate that a raw block PVC is requested.
Pod specification example
apiVersion: v1
kind: Pod
metadata:
name: pod-with-block-volume
spec:
containers:
- name: fc-container
image: fedora:26
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
volumeDevices:
- name: data
devicePath: /dev/xvda
volumes:
- name: data
persistentVolumeClaim:
claimName: block-pvc
- 1
- volumeDevices, instead of volumeMounts, is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes.
- 2
- devicePath, instead of mountPath, represents the path to the physical device where the raw block is mapped to the system.
- 3
- The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected.
Accepted values of volumeMode:

| Value | Default |
|---|---|
| Filesystem | Yes |
| Block | No |

Binding scenarios of block volumes:

| PV volumeMode | PVC volumeMode | Binding result |
|---|---|---|
| Filesystem | Filesystem | Bind |
| Unspecified | Unspecified | Bind |
| Filesystem | Unspecified | Bind |
| Unspecified | Filesystem | Bind |
| Block | Block | Bind |
| Unspecified | Block | No Bind |
| Block | Unspecified | No Bind |
| Filesystem | Block | No Bind |
| Block | Filesystem | No Bind |
Unspecified values result in the default value of Filesystem.
3.6. Using fsGroup to reduce pod timeouts
If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts.
This can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in the pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy field inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume.
The fsGroupChangePolicy field defines behavior for changing ownership and permission of the volume before it is exposed inside a pod. It applies only to volume types that support fsGroup-controlled ownership and permissions, and it has two possible values:
- OnRootMismatch: Only change permissions and ownership if the permission and ownership of the root directory do not match the expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume, reducing pod timeouts.
- Always: Always change permission and ownership of the volume when a volume is mounted.
fsGroupChangePolicy example
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
...
- 1
- OnRootMismatch specifies skipping the recursive permission change, thus helping to avoid pod timeout problems.
The fsGroupChangePolicy field has no effect on ephemeral volume types, such as secret, configMap, and emptyDir.
Chapter 4. Configuring persistent storage
4.1. Persistent storage using AWS Elastic Block Store
OpenShift Container Platform supports AWS Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision AWS EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS.
OpenShift Container Platform defaults to using an in-tree, or non-Container Storage Interface (CSI) plug-in to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
High-availability of storage in the infrastructure is left to the underlying storage provider.
OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
4.1.1. Creating the EBS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
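As a minimal sketch of such a storage class for EBS (assuming the AWS EBS CSI provisioner; the class name and volume type are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                     # illustrative name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                        # illustrative EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer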
4.1.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
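If you prefer the CLI, the console steps above correspond to creating a claim from YAML; the names are placeholders:

$ cat << EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc-name>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 10Gi
EOF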
4.1.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.1.4. Maximum number of EBS volumes on a node
By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits. The volume limit depends on the instance type.
As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type.
For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator.
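One way to gauge how close a node is to the limit is to count the CSI volumes currently attached to it through its VolumeAttachment objects; the node name is a placeholder:

$ oc get volumeattachments | grep <node-name> | wc -l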
4.1.5. Encrypting container persistent volumes on AWS with a KMS key
Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS.
Prerequisites
- Underlying infrastructure must contain storage.
- You must create a customer KMS key on AWS.
Procedure
Create a storage class:
$ cat << EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name> 1
parameters:
  fsType: ext4 2
  encrypted: "true"
  kmsKeyId: keyvalue 3
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

- 1
- Specifies the name of the storage class.
- 2
- File system that is created on provisioned volumes.
- 3
- Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true, then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation.
Create a persistent volume claim (PVC) with the storage class specifying the KMS key:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 1Gi
EOF

Create workload containers to consume the PVC:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: httpd
      image: quay.io/centos7/httpd-24-centos7
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /mnt/storage
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc
EOF
4.2. Persistent storage using Azure
OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.2.1. Creating the Azure storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
Procedure
- In the OpenShift Container Platform console, click Storage → Storage Classes.
- In the storage class overview, click Create Storage Class.
Define the desired options on the page that appears.
- Enter a name to reference the storage class.
- Enter an optional description.
- Select the reclaim policy.
- Select kubernetes.io/azure-disk from the drop-down list.
- Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS, Standard_LRS, StandardSSD_LRS, and UltraSSD_LRS.
- Enter the kind of account. Valid options are shared, dedicated, and managed.

Important: Red Hat only supports the use of kind: Managed in the storage class.

With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.
- Click Create to create the storage class.
4.2.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.2.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.2.4. Machine sets that deploy machines with ultra disks using PVCs
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.
Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
4.2.4.1. Creating machines with ultra disks by using machine sets
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

$ oc edit machineset <machine_set_name>

where <machine_set_name> is the machine set that you want to provision machines with ultra disks.

Add the following lines in the positions indicated:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd
      providerSpec:
        value:
          ultraSSDCapability: Enabled

Create a machine set using the updated configuration by running the following command:

$ oc create -f <machine_set_name>.yaml

Create a storage class that contains the following YAML definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc 1
parameters:
  cachingMode: None
  diskIopsReadWrite: "2000" 2
  diskMbpsReadWrite: "320" 3
  kind: managed
  skuname: UltraSSD_LRS
provisioner: disk.csi.azure.com 4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer 5

- 1
- Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
- Specify the number of IOPS for the storage class.
- 3
- Specify the throughput in MBps for the storage class.
- 4
- For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
- Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ultra-disk-sc
  resources:
    requests:
      storage: 4Gi

Create a pod that contains the following YAML definition:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ultra
spec:
  nodeSelector:
    disk: ultrassd
  containers:
    - name: nginx-ultra
      image: alpine:latest
      command:
        - "sleep"
        - "infinity"
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ultra-disk
Verification
Validate that the machines are created by running the following command:
$ oc get machines

The machines should be in the Running state.

For a machine that is running and has a node attached, validate the partition by running the following command:

$ oc debug node/<node_name> -- chroot /host lsblk

In this command, oc debug node/<node_name> starts a debugging shell on the node <node_name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
Next steps
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
    - name: ssd-benchmark1
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
  volumes:
    - name: lun0p1
      hostPath:
        path: /var/lib/lun0p1
        type: DirectoryOrCreate
  nodeSelector:
    disktype: ultrassd
4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks
Use the information in this section to understand and recover from issues you might encounter.
4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk
If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state.
For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, describe the pod by running the following command:
$ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>
4.3. Persistent storage using Azure File
OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically.
Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications.
High availability of storage in the infrastructure is left to the underlying storage provider.
Azure File volumes use Server Message Block (SMB).
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
4.3.1. Create the Azure File share persistent volume claim
To create the persistent volume claim, you must first define a Secret object that contains the Azure File credentials. This secret is then used in the PersistentVolume definition, and is referenced by the persistent volume claim for use in applications.
Prerequisites
- An Azure File share exists.
- The credentials to access this share, specifically the storage account and key, are available.
Procedure
Create a Secret object that contains the Azure File credentials:

$ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \
  --from-literal=azurestorageaccountkey=<storage-account-key>

Create a PersistentVolume object that references the Secret object you created:

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv0001"
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  storageClassName: azure-file-sc
  azureFile:
    secretName: <secret-name>
    shareName: share-1
    readOnly: false

Create a PersistentVolumeClaim object that maps to the persistent volume you created:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claim1" 1
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi" 2
  storageClassName: azure-file-sc 3
  volumeName: "pv0001" 4
- The name of the persistent volume claim.
- 2
- The size of this persistent volume claim.
- 3
- The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition.
- The name of the existing PersistentVolume object that references the Azure File share.
4.3.2. Mount the Azure File share in a pod
After the persistent volume claim has been created, it can be used inside an application. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying Azure File share.
Procedure
Create a pod that mounts the existing persistent volume claim:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name 1
spec:
  containers:
    ...
    volumeMounts:
      - mountPath: "/data" 2
        name: azure-file-share
  volumes:
    - name: azure-file-share
      persistentVolumeClaim:
        claimName: claim1 3
- The name of the pod.
- 2
- The path to mount the Azure File share inside the pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- The name of the PersistentVolumeClaim object that has been previously created.
4.4. Persistent storage using Cinder
OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.
Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
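As a sketch of dynamic provisioning with Cinder (assuming the OpenStack Cinder CSI provisioner; the class name is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-sc                  # illustrative name
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer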
4.4.1. Manual provisioning with Cinder
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Prerequisites
- OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP)
- Cinder volume ID
4.4.1.1. Creating the persistent volume
You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform:
Procedure
Save your object definition to a file.
cinder-persistentvolume.yaml
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001"1 spec: capacity: storage: "5Gi"2 accessModes: - "ReadWriteOnce" cinder:3 fsType: "ext3"4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180"5 - 1
- The name of the volume that is used by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes.
- The file system that is created when the volume is mounted for the first time.
- 5
- The Cinder volume to use.
Important: Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.

Create the object from the definition file you saved in the previous step:
4.4.1.2. Persistent volume formatting
You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
4.4.1.3. Cinder volume security
If you use Cinder PVs in your application, configure security for their deployment configurations.
Prerequisites
- An SCC must be created that uses the appropriate fsGroup strategy.
Procedure
Create a service account and add it to the SCC:
$ oc create serviceaccount <service_account>
$ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
In your application’s deployment configuration, provide the service account name and securityContext:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1 1
  selector: 2
    name: frontend
  template: 3
    metadata:
      labels: 4
        name: frontend 5
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
      serviceAccountName: <service_account> 6
      securityContext:
        fsGroup: 7777 7
- 1
- The number of copies of the pod to run.
- 2
- The label selector of the pod to run.
- 3
- A template for the pod that the controller creates.
- 4
- The labels on the pod. They must include labels from the label selector.
- 5
- The maximum name length after expanding any parameters is 63 characters.
- 6
- Specifies the service account you created.
- 7
- Specifies an fsGroup for the pods.
4.5. Persistent storage using Fibre Channel
OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre Channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed.
Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.5.1. Provisioning
To provision Fibre Channel volumes using the PersistentVolume object definition, the following is required:
- The targetWWNs (array of Fibre Channel target’s World Wide Names).
- A valid LUN number.
- The filesystem type.
A persistent volume and a LUN have a one-to-one mapping between them.
Prerequisites
- Fibre Channel LUNs must exist in the underlying infrastructure.
PersistentVolume object definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
fc:
  wwids: [3600508b400105e210000900000490000] 1
  targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2
  lun: 2 3
fsType: ext4
- 1
- World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems.
- 2 3
- Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Changing the value of the fstype parameter after the volume is formatted and provisioned can result in data loss and pod failure.
4.5.1.1. Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
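As an illustration, such a request is an ordinary persistent volume claim; the claim name fc-claim below is a hypothetical placeholder:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi         # matched with a PV of equal or greater capacity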
4.5.1.2. Fibre Channel volume security
Users request storage with a persistent volume claim. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.
4.6. Persistent storage using FlexVolume
FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
The out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to a CSI driver.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.
To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the flexVolume in-tree volume plugin.
4.6.1. About FlexVolume drivers
A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source.
Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume.
4.6.2. FlexVolume driver example
The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data.
The FlexVolume driver contains:
- All flexVolume.options.
- Some options from flexVolume prefixed by kubernetes.io/, such as fsType and readwrite.
- The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/.
FlexVolume driver JSON input example
{
"fooServer": "192.168.0.1:1234",
"fooVolumeName": "bar",
"kubernetes.io/fsType": "ext4",
"kubernetes.io/readwrite": "ro",
"kubernetes.io/secret/<key name>": "<key value>",
"kubernetes.io/secret/<another key name>": "<another key value>",
}
OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation.
FlexVolume driver default output example
{
"status": "<Success/Failure/Not supported>",
"message": "<Reason for success/failure>"
}
The exit code of the driver should be 0 for success and 1 for error.
Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation.
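As an illustration of this contract, the following is a minimal driver sketch, not a production implementation; the actual mount and unmount logic is left as placeholder comments:
#!/bin/bash
# Minimal FlexVolume driver sketch. The first argument is always the
# operation name; mount-style operations receive a mount directory and
# a complete JSON options string.

op=$1

case "$op" in
    init)
        # Called once per node during initialization.
        echo '{"status": "Success"}'
        exit 0
        ;;
    mount)
        mount_dir=$2
        json_options=$3
        # A real driver would parse $json_options and mount the backing
        # device at $mount_dir here, succeeding even if already mounted.
        echo '{"status": "Success"}'
        exit 0
        ;;
    unmount)
        mount_dir=$2
        # A real driver would clean up the mount at $mount_dir here.
        echo '{"status": "Success"}'
        exit 0
        ;;
    *)
        # All other operations report that they are not supported.
        echo '{"status": "Not supported"}'
        exit 1
        ;;
esac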
4.6.3. Installing FlexVolume drivers
FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required.
Prerequisites
FlexVolume drivers must implement these operations:
init
Initializes the driver. It is called during initialization of all nodes.
- Arguments: none
- Executed on: node
- Expected output: default JSON
mount
Mounts a volume to a directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON
unmount
Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON
mountdevice
Mounts a volume’s device to a directory where individual pods can then bind mount.
This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON
unmountdevice
Unmounts a volume’s device from a directory.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON
All other operations should return JSON with {"status": "Not supported"} and exit code 1.
Procedure
To install the FlexVolume driver:
- Ensure that the executable file exists on all nodes in the cluster.
- Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>.
For example, to install the FlexVolume driver for the storage foo, place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo
4.6.4. Consuming storage using FlexVolume drivers
Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume.
Procedure
- Use the PersistentVolume object to reference the installed storage.
Persistent volume object definition using FlexVolume drivers example
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: openshift.com/foo 3
    fsType: "ext4" 4
    secretRef: foo-secret 5
    readOnly: true 6
    options: 7
      fooServer: 192.168.0.1:1234
      fooVolumeName: bar
- 1
- The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage.
- 2
- The amount of storage allocated to this volume.
- 3
- The name of the driver. This field is mandatory.
- 4
- The file system that is present on the volume. This field is optional.
- 5
- The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional.
- 6
- The read-only flag. This field is optional.
- 7
- The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable:
"fsType":"<FS type>",
"readwrite":"<rw>",
"secret/key1":"<secret1>"
...
"secret/keyN":"<secretN>"
Secrets are passed only to mount or unmount call-outs.
4.7. Persistent storage using GCE Persistent Disk
OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
GCE Persistent Disk volumes can be provisioned dynamically.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.
For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.7.1. Creating the GCE storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
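As an illustration, a storage class that uses the in-tree GCE PD provisioner might resemble the following sketch; the class name gce-standard and the pd-standard disk type are assumptions, not values mandated by this document:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-standard          # hypothetical class name
provisioner: kubernetes.io/gce-pd   # in-tree GCE PD provisioner
parameters:
  type: pd-standard           # or pd-ssd for SSD-backed disks
  replication-type: none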
4.7.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.7.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.8. Persistent storage using iSCSI
You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
High-availability of storage in the infrastructure is left to the underlying storage provider.
When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260.
Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi. The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS).
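For example, on a Red Hat Enterprise Linux (RHEL) node, the check might look like the following sketch; the initiator name shown is a placeholder:
$ rpm -q iscsi-initiator-utils
$ cat /etc/iscsi/initiatorname.iscsi
Example output
InitiatorName=iqn.1994-05.com.redhat:<hash>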
For more information, see Managing Storage Devices.
4.8.1. Provisioning
Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API.
PersistentVolume object definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.16.154.81:3260
iqn: iqn.2014-12.example.server:storage.target00
lun: 0
fsType: 'ext4'
4.8.2. Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi) and be matched with a corresponding volume of equal or greater capacity.
4.8.3. iSCSI volume security
Users request storage with a PersistentVolumeClaim object. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail.
Each iSCSI LUN must be accessible by all nodes in the cluster.
4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration
Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
chapAuthDiscovery: true
chapAuthSession: true
secretRef:
name: chap-secret
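The secretRef in this example points at a secret named chap-secret that holds the CHAP credentials. As a sketch, such a secret could be created as follows; the key names are the ones read by the Kubernetes iSCSI plugin, and the user names and passwords are placeholders:
$ oc create secret generic chap-secret \
  --type="kubernetes.io/iscsi-chap" \
  --from-literal=discovery.sendtargets.auth.username=<discovery_user> \
  --from-literal=discovery.sendtargets.auth.password=<discovery_password> \
  --from-literal=node.session.auth.username=<session_user> \
  --from-literal=node.session.auth.password=<session_password>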
4.8.4. iSCSI multipathing
For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail.
To specify multi-paths in the pod specification, use the portals field. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
  portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
readOnly: false
- 1
- Add additional target portals using the portals field.
4.8.5. iSCSI custom initiator IQN
Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs.
To specify a custom initiator IQN, use the initiatorName field. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
  initiatorName: iqn.2016-04.test.com:custom.iqn 1
fsType: ext4
readOnly: false
- 1
- Specify the name of the initiator.
4.9. Persistent storage using NFS
OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
4.9.1. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required.
Procedure
Create an object definition for the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 5Gi 2
  accessModes:
    - ReadWriteOnce 3
  nfs: 4
    path: /tmp 5
    server: 172.17.0.2 6
  persistentVolumeReclaimPolicy: Retain 7
- 1
- The name of the volume. This is the PV identity in various oc <command> pod commands.
- 2
- The amount of storage allocated to this volume.
- 3
- Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
- 4
- The volume type being used, in this case the nfs plugin.
- 5
- The path that is exported by the NFS server.
- 6
- The hostname or IP address of the NFS server.
- 7
- The reclaim policy for the PV. This defines what happens to a volume when released.
Note
Each NFS volume must be mountable by all schedulable nodes in the cluster.
Verify that the PV was created:
$ oc get pv
Example output
NAME     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   <none>    5Gi        RWO           Available                    31s
Create a persistent volume claim that binds to the new PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0001
  storageClassName: ""
Verify that the persistent volume claim was created:
$ oc get pvc
Example output
NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-claim1   Bound    pv0001   5Gi        RWO                           2m
4.9.2. Enforcing disk quotas
You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
4.9.3. NFS volume security
This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition.
The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs.
As an example, if the target NFS directory appears on the NFS server as:
$ ls -lZ /opt/nfs -d
Example output
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
$ id nfsnobody
Example output
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups, to be able to access the directory.
The owner ID of 65534 is used as an example. Even though NFS’s root_squash maps root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.
4.9.3.1. Group IDs
The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.
To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.
Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:
spec:
containers:
- name:
...
securityContext:
supplementalGroups: [5555]
Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.
As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
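As a sketch of such a custom SCC, under the assumption that it is named nfs-scc and that only the strategies shown differ from what the cluster otherwise provides, it could look like the following; the range 5000-6000 is an illustrative choice that includes 5555:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nfs-scc                # hypothetical SCC name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange         # keep UID range checking
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: MustRunAs              # enforce group ID range checking
  ranges:
    - min: 5000                # illustrative range that includes 5555
      max: 6000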
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
4.9.3.2. User IDs
User IDs can be defined in the container image or in the Pod definition.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:
spec:
containers:
- name:
...
securityContext:
runAsUser: 65534
Assuming that the project is default and the SCC is restricted, the user ID of 65534 requested by the pod is not allowed. Therefore, the pod fails for the following reasons:
- It requests 65534 as its user ID.
- All SCCs available to the pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.
- Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
- 65534 is not included in the SCC or project’s user ID range.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
4.9.3.3. SELinux
Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default.
For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.
Prerequisites
- The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.
Procedure
Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.
# setsebool -P virt_use_nfs 1
4.9.3.4. Export settings
To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Every export must be exported using the following format:
/<example_fs> *(rw,root_squash)
The firewall must be configured to allow traffic to the mount point.
For NFSv4, configure the default port 2049 (nfs).
NFSv4
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).
NFSv3
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
- The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container’s primary UID, or supply the pod group access using supplementalGroups, as shown in the group IDs above.
4.9.4. Reclaiming resources
NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.
By default, PVs are set to Retain.
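For reference, the reclaim policy of an existing PV can be inspected and changed with oc; a short sketch, assuming a PV named pv0001:
$ oc get pv pv0001 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
$ oc patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'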
Once the claim bound to a PV is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs1
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.1.1
path: "/"
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim to nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs2
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.1.1
path: "/"
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.
4.9.5. Additional configuration and troubleshooting
Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:
| NFSv4 mount incorrectly shows all files with ownership of nobody:nobody | Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server. |
| Disabling ID mapping on NFSv4 | On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping |
4.10. Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.
4.11. Persistent storage using VMware vSphere volumes
OpenShift Container Platform allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image.
OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
4.11.1. Dynamically provisioning VMware vSphere volumes
Dynamically provisioning VMware vSphere volumes is the recommended method.
4.11.2. Prerequisites
- An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support.
You can use either of the following procedures to dynamically provision these volumes using the default storage class.
4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI
OpenShift Container Platform installs a default storage class, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
Define the required options on the resulting page.
- Select the thin storage class.
- Enter a unique name for the storage claim.
- Select the access mode to determine the read and write access for the created storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI
OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure (CLI)
You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the following contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Enter the following command to create the PersistentVolumeClaim object from the file:
$ oc create -f pvc.yaml
4.11.3. Statically provisioning VMware vSphere volumes
To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods:
Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:
$ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk
Create using vmware-vdiskmanager:
$ vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
Create a persistent volume that references the VMDKs. Create a file, pv1.yaml, with the PersistentVolume object definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume: 3
    volumePath: "[datastore1] volumes/myDisk" 4
    fsType: ext4 5
- 1
- The name of the volume. This name is how it is identified by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
- 4
- The existing VMDK volume to use. If you used vmkfstools, you must enclose the datastore name in square brackets, [], in the volume definition, as shown previously.
- 5
- The file system type to mount. For example, ext4, xfs, or other file systems.
Important
Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Create the PersistentVolume object from the file:
$ oc create -f pv1.yaml
Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml, with the PersistentVolumeClaim object definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  volumeName: pv1
Create the PersistentVolumeClaim object from the file:
$ oc create -f pvc1.yaml
4.11.3.1. Formatting VMware vSphere volumes
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
Chapter 5. Persistent storage using local storage
5.1. Local storage overview
You can use any of the following solutions to provision local storage:
- HostPath Provisioner (HPP)
- Local Storage Operator (LSO)
- Logical Volume Manager (LVM) Storage
These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.
5.1.1. Overview of HostPath Provisioner functionality
You can perform the following actions using HostPath Provisioner (HPP):
- Map the host filesystem paths to storage classes for provisioning local storage.
- Statically create storage classes to configure filesystem paths on a node for storage consumption.
- Statically provision Persistent Volumes (PVs) based on the storage class.
- Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.
5.1.2. Overview of Local Storage Operator functionality
You can perform the following actions using Local Storage Operator (LSO):
- Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
- Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
- Create workloads and PVCs while being aware of the underlying storage topology.
LSO is developed and delivered by Red Hat.
5.1.3. Overview of LVM Storage functionality
You can perform the following actions using Logical Volume Manager (LVM) Storage:
- Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
- Create workloads and request storage by using PVCs without considering the node topology.
LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.
LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".
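As a hedged sketch only, an LVMCluster resource that groups local disks into an lvm2 volume group and exposes it through a storage class might look like the following; the resource names, namespace, and thin-pool sizing are assumptions based on recent LVM Storage releases rather than text from this document:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster            # hypothetical name
  namespace: openshift-storage   # assumed installation namespace
spec:
  storage:
    deviceClasses:
      - name: vg1                # assumed device class / volume group name
        default: true            # expose as the default storage class
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90        # assumed share of the volume group
          overprovisionRatio: 10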
5.2. Persistent storage using local volumes
OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.
Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.
Local volumes can only be used as a statically created persistent volume.
5.2.1. Installing the Local Storage Operator
The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.
Prerequisites
- Access to the OpenShift Container Platform web console or command-line interface (CLI).
Procedure
Create the openshift-local-storage project:
$ oc adm new-project openshift-local-storage
You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.
You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.
To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:
$ oc annotate namespace openshift-local-storage openshift.io/node-selector=''Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.
Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.
To allow the Local Storage Operator to run on the management CPU pool, run the following command:
$ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'
From the UI
To install the Local Storage Operator from the web console, follow these steps:
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → OperatorHub.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
- Adjust the values for Update Channel and Approval Strategy to the values that you want.
- Click Install.
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.
From the CLI
Install the Local Storage Operator from the CLI.
Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:
Example openshift-local-storage.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable
  installPlanApproval: Automatic 1
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
- 1
- The user approval policy for an install plan.
Create the Local Storage Operator object by entering the following command:
$ oc apply -f openshift-local-storage.yaml
At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Verify local storage installation by checking that all pods and the Local Storage Operator have been created:
Check that all the required pods have been created:
$ oc -n openshift-local-storage get pods
Example output
NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m
Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:
$ oc get csvs -n openshift-local-storage
Example output
NAME                                         DISPLAY         VERSION               REPLACES   PHASE
local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded
After all checks have passed, the Local Storage Operator is installed successfully.
5.2.2. Provisioning local volumes by using the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Prerequisites
- The Local Storage Operator is installed.
You have a local disk that meets the following conditions:
- It is attached to a node.
- It is not mounted.
- It does not contain partitions.
Procedure
Create the local volume resource. This resource must define the nodes and paths to the local volumes.
Note
Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).
Example: Filesystem
apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage"1 spec: nodeSelector:2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc"3 volumeMode: Filesystem4 fsType: xfs5 devicePaths:6 - /path/to/device7 - 1
- The namespace where the Local Storage Operator is installed.
- 2
- Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
- 3
- The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
- 4
- The volume mode, either Filesystem or Block, that defines the type of local volumes.
Note
A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.
- 5
- The file system that is created when the local volume is mounted for the first time.
- 6
- The path containing a list of local storage devices to choose from.
- 7
- Replace this value with your actual local disks filepath to the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Note
If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
Example: Block
apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage"1 spec: nodeSelector:2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc"3 volumeMode: Block4 devicePaths:5 - /path/to/device6 - 1
- The namespace where the Local Storage Operator is installed.
- 2
- Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
- 3
- The name of the storage class to use when creating persistent volume objects.
- 4
- The volume mode, either Filesystem or Block, that defines the type of local volumes.
- 5
- The path containing a list of local storage devices to choose from.
- 6
- Replace this value with your actual local disks filepath to the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Note
If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:
$ oc create -f <local-volume>.yaml
Verify that the provisioner was created and that the corresponding daemon sets were created:
$ oc get all -n openshift-local-storage
Example output
NAME                                          READY   STATUS    RESTARTS   AGE
pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-storage-operator   1/1     1            1           14m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m
Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.
Verify that the persistent volumes were created:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.
5.2.3. Provisioning local volumes without the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.
Prerequisites
- Local disks are attached to the OpenShift Container Platform nodes.
Procedure
Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.
Note
Do not use different storage class names for the same device. Doing so will create multiple PVs.
example-pv-filesystem.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-filesystem
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem 1
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc 2
  local:
    path: /dev/xvdf 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
- 1
- The volume mode, either Filesystem or Block, that defines the type of PVs.
- 2
- The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
- 3
- The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
Note
A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.
example-pv-block.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-block
spec:
  capacity:
    storage: 100Gi
  volumeMode: Block 1
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc 2
  local:
    path: /dev/xvdf 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:
$ oc create -f <example-pv>.yaml
Verify that the local PV was created:
$ oc get pv
Example output
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE
example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc                3m47s
example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc                12h
example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc                12h
example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc                12h
5.2.4. Creating the local volume persistent volume claim
Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.
Prerequisites
- Persistent volumes have been created using the local volume provisioner.
Procedure
Create the PVC using the corresponding storage class:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc-name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-sc
Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pvc>.yaml
5.2.5. Attach the local claim
After a local volume has been mapped to a persistent volume claim, it can be specified inside of a resource.
Prerequisites
- A persistent volume claim exists in the same namespace.
Procedure
Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:
apiVersion: v1
kind: Pod
spec:
# ...
  containers:
    volumeMounts:
    - name: local-disks 1
      mountPath: /data 2
  volumes:
  - name: local-disks
    persistentVolumeClaim:
      claimName: local-pvc-name 3
# ...
- 1
- The name of the volume to mount.
- 2
- The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- 3
- The name of the existing persistent volume claim to use.
Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pod>.yaml
5.2.6. Automating discovery and provisioning for local storage devices
The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.
Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.
Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.
Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all matching devices. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet object that target a node more than once is not supported.
Prerequisites
- You have cluster administrator permissions.
- You have installed the Local Storage Operator.
- You have attached local disks to OpenShift Container Platform nodes.
- You have access to the OpenShift Container Platform web console and the command-line interface (CLI), oc.
Procedure
To enable automatic discovery of local devices from the web console:
- Click Operators → Installed Operators.
- In the openshift-local-storage namespace, click Local Storage.
openshift-local-storage - Click the Local Volume Discovery tab.
- Click Create Local Volume Discovery and then select either Form view or YAML view.
- Configure the LocalVolumeDiscovery object parameters.
Click Create.
The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.
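For reference, the YAML view corresponds to a resource similar to the following sketch; the node selector and its values are illustrative assumptions, while the name auto-discover-devices matches the instance that the Operator creates:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-0          # placeholder node names
        - worker-1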
To display a continuous list of available devices on a node:
- Log in to the OpenShift Container Platform web console.
- Navigate to Compute → Nodes.
- Click the node name that you want to open. The "Node Details" page is displayed.
Select the Disks tab to display the list of the selected devices.
The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.
To automatically provision local volumes for the discovered devices from the web console:
- Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.
- Select Local Volume Set → Create Local Volume Set.
- Enter a volume set name and a storage class name.
Choose All nodes or Select nodes to apply filters accordingly.
Note
Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.
Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.
A message displays after several minutes, indicating that the "Operator reconciled successfully."
Alternatively, to provision local volumes for the discovered devices from the CLI:
Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: example-autodetect
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0
              - worker-1
  storageClassName: local-sc 1
  volumeMode: Filesystem
  fsType: ext4
  maxDeviceCount: 10
  deviceInclusionSpec:
    deviceTypes: 2
      - disk
      - part
    deviceMechanicalProperties:
      - NonRotational
    minSize: 10G
    maxSize: 100G
    models:
      - SAMSUNG
      - Crucial_CT525MX3
    vendors:
      - ATA
      - ST2000LM
- 1
- Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
- 2
- When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
Create the local volume set object:
$ oc apply -f local-volume-set.yaml
Verify that the local persistent volumes were dynamically provisioned based on the storage class:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Results are deleted after they are removed from the node. Symlinks must be manually removed.
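Once provisioned, a workload claims one of these volumes through a PVC that names the storage class. A minimal sketch, assuming the local-sc storage class from the example output:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-sc
  resources:
    requests:
      storage: 100Gi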
5.2.7. Using tolerations with Local Storage Operator pods
Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.
You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification.
Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect.
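For example, a node could be tainted with the key and value used later in this procedure; both names are illustrative choices, not required values:
$ oc adm taint nodes <node_name> localstorage=localstorage:NoSchedule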
Prerequisites
- The Local Storage Operator is installed.
- Local disks are attached to OpenShift Container Platform nodes with a taint.
- Tainted nodes are expected to provision local storage.
Procedure
To configure local volumes for scheduling on tainted nodes:
Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example:
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage"
spec:
  tolerations:
    - key: localstorage # 1
      operator: Equal # 2
      value: "localstorage" # 3
  storageClassDevices:
    - storageClassName: "local-sc"
      volumeMode: Block # 4
      devicePaths: # 5
        - /dev/xvdg
- 1
- Specify the key that you added to the node.
- 2
- Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
- 3
- Specify the value of the tainted node.
- 4
- The volume mode, either Filesystem or Block, defining the type of the local volumes.
- 5
- The path containing a list of local storage devices to choose from.
Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
5.2.8. Local Storage Operator Metrics
OpenShift Container Platform provides the following metrics for the Local Storage Operator:
- lso_discovery_disk_count: total number of discovered devices on each node
- lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
- lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
- lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
- lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
- lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume
To use these metrics, be sure to:
- Enable support for monitoring when installing the Local Storage Operator.
- When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace.
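For example, assuming the Operator was installed in the default openshift-local-storage namespace, the label can be applied as follows:
$ oc label namespace openshift-local-storage operator-metering=true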
5.2.9. Deleting the Local Storage Operator resources
5.2.9.1. Removing a local volume or local volume set
Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.
The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.
Prerequisites
The persistent volume must be in a Released or Available state.
Warning: Deleting a persistent volume that is still in use can result in data loss or corruption.
Procedure
Edit the previously created local volume to remove any unwanted disks.
Edit the cluster resource:
$ oc edit localvolume <local_volume_name> -n openshift-local-storage
- Navigate to the lines under devicePaths, and delete any representing unwanted disks.
Delete any persistent volumes created:
$ oc delete pv <pv_name>
Delete the directory and included symlinks on the node.
Warning: The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.
$ oc debug node/<node_name> -- chroot /host rm -rf /mnt/local-storage/<sc_name> # 1
- 1
- The name of the storage class used to create the local volumes.
5.2.9.2. Uninstalling the Local Storage Operator
To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.
Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery, by running the following commands:
$ oc delete localvolume --all --all-namespaces
$ oc delete localvolumeset --all --all-namespaces
$ oc delete localvolumediscovery --all --all-namespaces
Uninstall the Local Storage Operator from the web console.
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click the Options menu at the end of the Local Storage Operator entry.
- Click Uninstall Operator.
- Click Remove in the window that appears.
The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:
$ oc delete pv <pv-name>
Delete the openshift-local-storage project by running the following command:
$ oc delete project openshift-local-storage
5.3. Persistent storage using hostPath
A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
The cluster administrator must configure pods to run as privileged. This grants the pods access to files on the node that they run on.
5.3.1. Overview
OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.
A hostPath volume must be provisioned statically.
Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.
apiVersion: v1
kind: Pod
metadata:
name: test-host-mount
spec:
containers:
- image: registry.access.redhat.com/ubi8/ubi
name: test-container
command: ['sh', '-c', 'sleep 3600']
volumeMounts:
- mountPath: /host
name: host-slash
volumes:
- name: host-slash
hostPath:
path: /
type: ''
5.3.2. Statically provisioning hostPath volumes
A pod that uses a hostPath volume must reference a manually (statically) provisioned persistent volume.
Procedure
Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume # 1
  labels:
    type: local
spec:
  storageClassName: manual # 2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce # 3
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data" # 4
- 1
- The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
- 2
- Used to bind persistent volume claim (PVC) requests to the PV.
- 3
- The volume can be mounted as read-write by a single node.
- 4
- The configuration file specifies that the volume is at /mnt/data on the cluster's node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
Create the PV from the file:
$ oc create -f pv.yaml
Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual
Create the PVC from the file:
$ oc create -f pvc.yaml
5.3.3. Mounting the hostPath share in a privileged pod
After the persistent volume claim has been created, it can be used by an application inside a pod. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying hostPath share.
Procedure
Create a privileged pod that mounts the existing persistent volume claim:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name # 1
spec:
  containers:
    ...
    securityContext:
      privileged: true # 2
    volumeMounts:
      - mountPath: /data # 3
        name: hostpath-privileged
  ...
  securityContext: {}
  volumes:
    - name: hostpath-privileged
      persistentVolumeClaim:
        claimName: task-pvc-volume # 4
- 1
- The name of the pod.
- 2
- The pod must run as privileged to access the node's storage.
- 3
- The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- 4
- The name of the PersistentVolumeClaim object that has been previously created.
5.4. Persistent storage using Logical Volume Manager Storage
Logical Volume Manager (LVM) Storage uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.
LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on a resource-limited single-node OpenShift cluster.
You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.
5.4.1. Logical Volume Manager Storage installation
You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.
You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI (oc), the OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).
5.4.1.1. Prerequisites to install LVM Storage
The prerequisites to install LVM Storage are as follows:
- Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
- Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.
Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use; see the sketch after this list. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.
Note: You cannot wipe the disks that are in use.
- If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the Installing LVM Storage using RHACM section.
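As a sketch of one common way to clear file system signatures, assuming /dev/<device> is the unused disk to be reused and that it is not mounted or otherwise in use:
$ wipefs --all --force /dev/<device>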
5.4.1.2. Installing LVM Storage using OpenShift Container Platform Web Console
You can install LVM Storage using the Red Hat OpenShift Container Platform OperatorHub.
Prerequisites
- You have access to the single-node OpenShift cluster.
- You are using an account with cluster-admin and Operator installation permissions.
Procedure
- Log in to the OpenShift Container Platform Web Console.
- Click Operators → OperatorHub.
- Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.
LVM Storage - Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.12.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Click Install.
Verification steps
- Verify that LVM Storage shows a green tick, indicating successful installation.
5.4.1.3. Installing LVM Storage using RHACM
LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy object on RHACM that deploys and configures the Operator when it is applied to the managed clusters that match the selector specified in the PlacementRule resource.
Prerequisites
- Access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
- Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.
- The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Create a namespace in which you will create policies.
# oc create ns lvms-policy-nsTo create a policy, save the following YAML to a file with a name such as
:policy-lvms-operator.yamlapiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector:1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector:2 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 nodeSelector:3 nodeSelectorTerms: - matchExpressions: - key: app operator: In values: - test1 remediationAction: enforce severity: low- 1
- Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
- To control or restrict the volume group to your preferred disks, you can manually specify the local paths of the disks in the deviceSelector section of the LVMCluster YAML.
- To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up.
Important: This nodeSelector node filter matching is not the same as the pod label matching.
Create the policy in the namespace by running the following command:
# oc create -f policy-lvms-operator.yaml -n lvms-policy-ns # 1
- 1
- The policy-lvms-operator.yaml is the name of the file to which the policy is saved.
This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters which match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.
Important: After a device is added to the LVMCluster, it cannot be removed.
5.4.1.4. Limitations to configure the size of the devices used in LVM Storage
The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:
- The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
- You can define the size of PE and LE during the physical and logical device creation.
- The default PE and LE size is 4 MB.
- If the size of the PE is increased, the maximum size of a logical volume is determined by the kernel limits and your disk space.
| Architecture | RHEL 6 | RHEL 7 | RHEL 8 | RHEL 9 |
|---|---|---|---|---|
| 32-bit | 16 TB | - | - | - |
| 64-bit | 8 EB [1] 100 TB [2] | 8 EB [1] 500 TB [2] | 8 EB | 8 EB |
[1] Theoretical size.
[2] Tested size.
5.4.2. Provisioning storage using LVM Storage
You can provision persistent volume claims (PVCs) using the storage class that is created during the Operator installation. You can provision block and file PVCs; however, the storage is allocated only when a pod that uses the PVC is created.
LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB. For example, a PVC that requests 1.1 GiB is allocated 2 GiB.
Procedure
Identify the StorageClass that is created when LVM Storage is deployed.
The StorageClass name is in the format lvms-<device-class-name>. The device-class-name is the name of the device class that you provided in the LVMCluster of the Policy YAML. For example, if the deviceClass is called vg1, then the storageClass name is lvms-vg1.
The volumeBindingMode of the storage class is set to WaitForFirstConsumer.
To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.
Example YAML to create a PVC
# block pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
---
# file pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
Create the PVC by running the following command:
# oc create -f pvc.yaml -n <application_namespace>
The created PVCs remain in Pending state until you deploy the pods that use them.
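Because the storage class uses the WaitForFirstConsumer binding mode, provisioning starts when a pod consumes the claim. A minimal sketch of such a pod, reusing the lvm-file-1 PVC from the example above; the image and command are illustrative choices:
apiVersion: v1
kind: Pod
metadata:
  name: lvm-file-consumer
  namespace: default
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: lvm-storage
  volumes:
    - name: lvm-storage
      persistentVolumeClaim:
        claimName: lvm-file-1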
5.4.3. Expanding PVCs
To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.
Prerequisites
- Dynamic provisioning is used.
- The controlling StorageClass has allowVolumeExpansion set to true.
Procedure
Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:
$ oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
- Watch the status.conditions field of the PVC to see if the resize has completed. OpenShift Container Platform adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes.
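For example, one way to inspect the conditions directly, using the same placeholders as above:
$ oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.conditions}'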
5.4.4. Upgrading LVM Storage on single-node OpenShift clusters
Currently, it is not possible to upgrade from OpenShift Data Foundation Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters.
The data will not be preserved during this process.
Procedure
- Back up any data that you want to preserve on the persistent volume claims (PVCs).
- Delete all PVCs provisioned by the OpenShift Data Foundation Logical Volume Manager Operator and their pods.
- Reinstall LVM Storage on OpenShift Container Platform 4.12.
- Recreate the workloads.
- Copy the backup data to the PVCs created after upgrading to 4.12.
5.4.5. Volume snapshots for single-node OpenShift
You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:
Back up your application data.
Important: Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.
- Revert to a state at which the volume snapshot was taken.
5.4.5.1. Creating volume snapshots in single-node OpenShift
You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.
Prerequisites
- You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
- You stopped all the I/O to the PVC before taking the snapshot.
Procedure
- Log in to the single-node OpenShift cluster for which you need to run the oc command.
Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.
Example YAML to create a volume snapshot
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap
spec:
  volumeSnapshotClassName: lvms-vg1
  source:
    persistentVolumeClaimName: lvm-block-1
Create the snapshot by running the following command in the same namespace as the PVC:
# oc create -f lvms-vol-snapshot.yaml
A read-only copy of the PVC is created as a volume snapshot.
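To check whether the snapshot is ready to use, the standard readyToUse status field of the VolumeSnapshot object can be queried, for example:
# oc get volumesnapshot lvm-block-1-snap -o jsonpath='{.status.readyToUse}'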
5.4.5.2. Restoring volume snapshots in single-node OpenShift
When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.
Prerequisites
- The storage class must be the same as that of the source PVC.
The size of the requested PVC must be the same as that of the source volume of the snapshot.
Important: A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.
Procedure
- Identify the storage class name of the source PVC and volume snapshot name.
Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.
Example YAML to restore a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-block-1-restore
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
Create the PVC by running the following command in the same namespace as the snapshot:
# oc create -f lvms-vol-restore.yaml
5.4.5.3. Deleting volume snapshots in single-node OpenShift
You can delete volume snapshot resources and persistent volume claims (PVCs).
Procedure
Delete the volume snapshot resource by running the following command:
# oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
Note: When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.
To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:
# oc delete pvc <pvc_name> -n <namespace>
5.4.6. Volume cloning for single-node OpenShift
A clone is a duplicate of an existing storage volume that can be used like any standard volume.
5.4.6.1. Creating volume clones in single-node OpenShift
You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.
The cloned PVC has write access.
Prerequisites
- You ensured that the PVC is in Bound state. This is required for a consistent snapshot.
- You ensured that the StorageClass is the same as that of the source PVC.
Procedure
- Identify the storage class of the source PVC.
To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:
Example YAML to clone a volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1-clone
spec:
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
Create the PVC by running the following command in the same namespace as the source PVC:
# oc create -f lvms-vol-clone.yaml
5.4.6.2. Deleting cloned volumes in single-node OpenShift
You can delete cloned volumes.
Procedure
To delete the cloned volume, delete the cloned PVC by running the following command:
# oc delete pvc <clone_pvc_name> -n <namespace>
5.4.7. Monitoring LVM Storage
To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:
openshift.io/cluster-monitoring=true
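For example, assuming LVM Storage was installed in the openshift-storage namespace:
$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true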
For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.
5.4.7.1. Metrics
You can monitor LVM Storage by viewing the metrics.
The following table describes the topolvm metrics:
| Metric | Description |
|---|---|
| topolvm_thinpool_data_percent | Indicates the percentage of data space used in the LVM thinpool. |
| topolvm_thinpool_metadata_percent | Indicates the percentage of metadata space used in the LVM thinpool. |
| topolvm_thinpool_size_bytes | Indicates the size of the LVM thin pool in bytes. |
| topolvm_volumegroup_available_bytes | Indicates the available space in the LVM volume group in bytes. |
| topolvm_volumegroup_size_bytes | Indicates the size of the LVM volume group in bytes. |
| topolvm_thinpool_overprovisioned_available | Indicates the available over-provisioned size of the LVM thin pool in bytes. |
Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
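As an illustration, assuming the metric names listed above and that cluster monitoring is enabled, the thin pool data usage can be viewed in the console under Observe → Metrics with a PromQL query such as:
topolvm_thinpool_data_percent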
5.4.7.2. Alerts
When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.
LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:
| Alert | Description |
|---|---|
| VolumeGroupUsageAtThresholdNearFull | This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required. |
| VolumeGroupUsageAtThresholdCritical | This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. |
| ThinPoolDataUsageAtThresholdNearFull | This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolDataUsageAtThresholdCritical | This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolMetaDataUsageAtThresholdNearFull | This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolMetaDataUsageAtThresholdCritical | This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
5.4.8. Downloading log files and diagnostic information using must-gather
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or the Red Hat Support can review the problem and determine a solution.
Procedure
Run the must-gather command from the client connected to the LVM Storage cluster:
$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.12 --dest-dir=<directory_name>
5.4.8.1. Investigating a PVC stuck in the Pending state
A persistent volume claim (PVC) can get stuck in a Pending state for a number of reasons. For example:
- Insufficient computing resources
- Network problems
- Mismatched storage class or node selector
- No available volumes
- The node with the persistent volume (PV) is in a Not Ready state
Identify the cause by using the oc describe command to review details about the stuck PVC.
Procedure
Retrieve the list of PVCs by running the following command:
$ oc get pvc
Example output
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvms-test   Pending                                      lvms-vg1       11s
Inspect the events associated with a PVC stuck in the Pending state by running the following command:
$ oc describe pvc <pvc_name> # 1
- 1
- Replace <pvc_name> with the name of the PVC. For example, lvms-test.
Example output
Type     Reason              Age               From                         Message
----     ------              ----              ----                         -------
Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
5.4.8.2. Recovering from missing LVMS or Operator components
If you encounter a storage class "not found" error, check the LVMCluster resource and ensure that all the logical volume manager storage (LVMS) pods are running. You can create an LVMCluster resource if it does not exist.
Procedure
Verify the presence of the LVMCluster resource by running the following command:
$ oc get lvmcluster -n openshift-storage
Example output
NAME            AGE
my-lvmcluster   65m
If the cluster does not have an LVMCluster resource, create one by running the following command:
$ oc create -n openshift-storage -f <custom_resource> # 1
- 1
- Replace <custom_resource> with a custom resource URL or file tailored to your requirements.
Example custom resource
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
Check that all the pods from LVMS are in the Running state in the openshift-storage namespace by running the following command:
$ oc get pods -n openshift-storage
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
lvms-operator-7b9fb858cb-6nsml        3/3     Running   0          70m
topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0          66m
topolvm-node-dr26h                    4/4     Running   0          66m
vg-manager-r6zdv                      1/1     Running   0          66m
The expected output is one running instance of lvms-operator and vg-manager. One instance of topolvm-controller and topolvm-node is expected for each node.
If topolvm-node is stuck in the Init state, there is a failure to locate an available disk for LVMS to use. To retrieve the information necessary to troubleshoot, review the logs of the vg-manager pod by running the following command:
$ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
5.4.8.4. Recovering from disk failure
If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there might be a problem with the underlying volume or disk. Disk and volume provisioning issues often result in a generic error first, such as Failed to provision volume with StorageClass <storage_class_name>.
Procedure
Inspect the events associated with a PVC by running the following command:
$ oc describe pvc <pvc_name> # 1
- 1
- Replace <pvc_name> with the name of the PVC.
Here are some examples of disk or volume failure error messages and their causes:
- Failed to check volume existence: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.
- Failed to bind volume: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.
- FailedMount or FailedUnMount: This error indicates problems when trying to mount the volume to a node or unmount a volume from a node. If the disk has failed, this error might appear when a pod tries to use the PVC.
- Volume is already exclusively attached to one node and can't be attached to another: This error can appear with storage solutions that do not support ReadWriteMany access modes.
- Establish a direct connection to the host where the problem is occurring.
- Resolve the disk issue.
After you have resolved the issue with the disk, you might need to perform the forced cleanup procedure if failure messages persist or reoccur.
5.4.8.5. Performing a forced cleanup
If disk- or node-related problems persist after you complete the troubleshooting procedures, it might be necessary to perform a forced cleanup procedure. A forced cleanup is used to comprehensively address persistent issues and ensure the proper functioning of the LVMS.
Prerequisites
- All of the persistent volume claims (PVCs) created using the logical volume manager storage (LVMS) driver have been removed.
- The pods using those PVCs have been stopped.
Procedure
Switch to the openshift-storage namespace by running the following command:
$ oc project openshift-storage
Ensure that there is no LogicalVolume custom resource (CR) remaining by running the following command:
$ oc get logicalvolume
Example output
No resources found
If there are any LogicalVolume CRs remaining, remove their finalizers by running the following command:
$ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge # 1
- 1
- Replace <name> with the name of the CR.
After removing their finalizers, delete the CRs by running the following command:
$ oc delete logicalvolume <name> # 1
- 1
- Replace <name> with the name of the CR.
Make sure there are no LVMVolumeGroup CRs left by running the following command:
$ oc get lvmvolumegroup
Example output
No resources found
If there are any LVMVolumeGroup CRs left, remove their finalizers by running the following command:
$ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge # 1
- 1
- Replace <name> with the name of the CR.
After removing their finalizers, delete the CRs by running the following command:
$ oc delete lvmvolumegroup <name> # 1
- 1
- Replace <name> with the name of the CR.
Remove any LVMVolumeGroupNodeStatus CRs by running the following command:
$ oc delete lvmvolumegroupnodestatus --all
Remove the LVMCluster CR by running the following command:
$ oc delete lvmcluster --all
Chapter 6. Using Container Storage Interface (CSI)
6.1. Configuring CSI volumes
The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage back ends that implement the CSI interface as persistent storage.
OpenShift Container Platform 4.12 supports version 1.6.0 of the CSI specification.
6.1.1. CSI architecture
CSI drivers are typically shipped as container images. These containers are not aware of the OpenShift Container Platform environment in which they run. To use a CSI-compatible storage back end in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver.
The following diagram provides a high-level overview about the components running in pods in the OpenShift Container Platform cluster.
It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar.
6.1.1.1. External CSI controllers
External CSI controllers is a deployment that deploys one or more pods with five containers:
- The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects.
- The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on the PersistentVolumeClaim object.
- An external CSI attacher container translates attach and detach calls from OpenShift Container Platform to respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
- An external CSI provisioner container that translates provision and delete calls from OpenShift Container Platform to respective CreateVolume and DeleteVolume calls to the CSI driver.
- A CSI driver container.
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
The attach, detach, provision, and delete operations typically require the CSI driver to use credentials to the storage back end. Run the CSI controller pods on infrastructure nodes so that the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node.
The external attacher must also run for CSI drivers that do not support third-party attach and detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Container Platform attachment API.
6.1.1.2. CSI driver daemon set
The CSI driver daemon set runs a pod on every node that allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:
- A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
- A CSI driver.
The CSI driver deployed on the node should have as few credentials to the storage back end as possible. OpenShift Container Platform will only use the node plugin set of CSI calls such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage.
6.1.2. CSI drivers supported by OpenShift Container Platform
OpenShift Container Platform installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.
To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Container Platform installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
The following table describes the CSI drivers that are installed with OpenShift Container Platform and which CSI features they support, such as volume snapshots, cloning, and resize.
The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see Setting up AWS Elastic File Service CSI Driver Operator. For instructions on installing the GCP Filestore CSI driver, see Google Compute Platform Filestore CSI Driver Operator.
For a list of third-party-certified CSI drivers, see the Red Hat ecosystem portal under Additional resources.
| CSI driver | CSI volume snapshots | CSI cloning | CSI resize |
|---|---|---|---|
| AliCloud Disk | ✅ | - | ✅ |
| AWS EBS | ✅ | - | ✅ |
| AWS EFS | - | - | - |
| Google Compute Platform (GCP) persistent disk (PD) | ✅ | ✅ | ✅ |
| GCP Filestore | ✅ | - | ✅ |
| IBM VPC Block | ✅[3] | - | ✅[3] |
| LVM Storage | ✅ | ✅ | ✅ |
| Microsoft Azure Disk | ✅ | ✅ | ✅ |
| Microsoft Azure Stack Hub | ✅ | ✅ | ✅ |
| Microsoft Azure File | - | - | ✅ |
| OpenStack Cinder | ✅ | ✅ | ✅ |
| OpenShift Data Foundation | ✅ | ✅ | ✅ |
| OpenStack Manila | ✅ | - | - |
| Red Hat Virtualization (oVirt) | - | - | ✅ |
| VMware vSphere | ✅[1] | - | ✅[2] |
1. Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi. Does not support fileshare volumes.
2. Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06. Online volume expansion: minimum required vSphere version is 7.0 Update 2.
3. Does not support offline snapshots or resize. Volume must be attached to a running pod.
If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.
6.1.3. Dynamic provisioning
Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Container Platform and the parameters available for configuration.
The created storage class can be configured to enable dynamic provisioning.
Procedure
Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver:
# oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class> # 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <provisioner-name> # 2
parameters:
EOF
- 1
- The name of the storage class to create.
- 2
- The provisioner name of the installed CSI driver.
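To confirm the result, the storage classes can be listed; the default class is marked with (default) next to its name in the output:
# oc get storageclass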
6.1.4. Example using the CSI driver
The following example installs a default MySQL template without any changes to the template.
Prerequisites
- The CSI driver has been deployed.
- A storage class has been created for dynamic provisioning.
Procedure
Create the MySQL template:
# oc new-app mysql-persistent
Example output
--> Deploying template "openshift/mysql-persistent" to project default
...
# oc get pvc
Example output
NAME    STATUS   VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql   Bound    kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
6.1.5. Volume populators
Volume populators use the datasource field to create pre-populated volumes.
Volume population is currently enabled, and supported as a Technology Preview feature. However, OpenShift Container Platform does not ship with any volume populators.
Volume populators is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
For more information about volume populators, see Kubernetes volume populators.
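As an illustrative sketch only: a PVC that references a populator-provided source does so through the dataSourceRef field, along the following lines. The example.populator.io API group and ExamplePopulator kind are invented placeholders; a real volume populator and its CRD must be installed separately:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSourceRef:
    apiGroup: example.populator.io
    kind: ExamplePopulator
    name: example-source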
6.2. CSI inline ephemeral volumes
Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a pod is deployed and deletes them when a pod is destroyed.
This feature is only available with supported Container Storage Interface (CSI) drivers.
CSI inline ephemeral volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.2.1. Overview of CSI inline ephemeral volumes
Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination.
This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume object. Inline volumes are ephemeral and do not persist across pod restarts.
6.2.1.1. Support limitations
By default, OpenShift Container Platform supports CSI inline ephemeral volumes with these limitations:
- Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
- The Shared Resource CSI Driver supports inline ephemeral volumes as a Technology Preview feature.
- Community or storage vendors provide other CSI drivers that support these volumes. Follow the installation instructions provided by the CSI driver provider.
CSI drivers might not have implemented the inline volume functionality, including the Ephemeral volume lifecycle mode. See the documentation provided by the CSI driver for details.
Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.2.2. Embedding a CSI inline ephemeral volume in the pod specification
You can embed a CSI inline ephemeral volume in the Pod specification in OpenShift Container Platform. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed.
Procedure
- Create the Pod object definition and save it to a file.
Embed the CSI inline ephemeral volume in the my-csi-app.yaml file:
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-inline-vol
      command: [ "sleep", "1000000" ]
  volumes: # 1
    - name: my-csi-inline-vol
      csi:
        driver: inline.storage.kubernetes.io
        volumeAttributes:
          foo: bar
- 1
- The name of the volume that is used by pods.
Create the object definition file that you saved in the previous step.
$ oc create -f my-csi-app.yaml
6.4. CSI volume snapshots
This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested.
6.4.1. Overview of CSI volume snapshots
A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.
OpenShift Container Platform supports Container Storage Interface (CSI) volume snapshots by default. However, a specific CSI driver is required.
With CSI volume snapshots, a cluster administrator can:
- Deploy a third-party CSI driver that supports snapshots.
- Create a new persistent volume claim (PVC) from an existing volume snapshot.
- Take a snapshot of an existing PVC.
- Restore a snapshot as a different PVC.
- Delete an existing volume snapshot.
With CSI volume snapshots, an app developer can:
- Use volume snapshots as building blocks for developing application- or cluster-level storage backup solutions.
- Rapidly rollback to a previous development version.
- Use storage more efficiently by not having to make a full copy each time.
Be aware of the following when using volume snapshots:
- Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
- OpenShift Container Platform only ships with select CSI drivers. For CSI drivers that are not provided by an OpenShift Container Platform Driver Operator, it is recommended to use the CSI drivers provided by community or storage vendors. Follow the installation instructions furnished by the CSI driver provider.
- CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the csi-external-snapshotter sidecar. See documentation provided by the CSI driver for details.
6.4.2. CSI snapshot controller and sidecar
OpenShift Container Platform provides a snapshot controller that is deployed into the control plane. In addition, your CSI driver vendor provides the CSI snapshot sidecar as a helper container that is installed during the CSI driver installation.
The CSI snapshot controller and sidecar provide volume snapshotting through the OpenShift Container Platform API. These external components run in the cluster.
The external controller is deployed by the CSI Snapshot Controller Operator.
6.4.2.1. External controller
The CSI snapshot controller binds VolumeSnapshot objects with VolumeSnapshotContent objects and triggers the creation and deletion of VolumeSnapshotContent objects.
6.4.2.2. External sidecar
Your CSI driver vendor provides the csi-external-snapshotter sidecar. This is a separate helper container that watches for VolumeSnapshotContent changes and triggers CreateSnapshot and DeleteSnapshot operations against a CSI endpoint.
6.4.3. About the CSI Snapshot Controller Operator
The CSI Snapshot Controller Operator runs in the openshift-cluster-storage-operator namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default.
The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the openshift-cluster-storage-operator namespace.
6.4.3.1. Volume snapshot CRDs
During OpenShift Container Platform installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the snapshot.storage.k8s.io/v1 API group:
VolumeSnapshotContent
A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator.
Similar to the PersistentVolume object, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage back end.
For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent CRDs. These carry the details of the real volume snapshot in the storage system.
The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator.
VolumeSnapshot
Similar to the PersistentVolumeClaim object, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot CRD with an appropriate VolumeSnapshotContent CRD. The binding is a one-to-one mapping.
The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot.
VolumeSnapshotClass
Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim.
The VolumeSnapshotClass CRD defines the parameters for the csi-external-snapshotter sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported.
Dynamically provisioned snapshots use the VolumeSnapshotClass CRD to specify storage-provider-specific parameters to use when creating a snapshot.
The VolumeSnapshotClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end.
6.4.4. Volume snapshot provisioning
There are two ways to provision snapshots: dynamically and manually.
6.4.4.1. Dynamic provisioning
Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a VolumeSnapshotClass object.
6.4.4.2. Manual provisioning
As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users.
6.4.5. Creating a volume snapshot
When you create a VolumeSnapshot object, OpenShift Container Platform creates a volume snapshot.
Prerequisites
- Logged in to a running OpenShift Container Platform cluster.
- A PVC created using a CSI driver that supports VolumeSnapshot objects.
- A storage class to provision the storage back end.
No pods are using the persistent volume claim (PVC) that you want to take a snapshot of.
Warning: Creating a volume snapshot of a PVC that is in use by a pod can cause unwritten data and cached data to be excluded from the snapshot. To ensure that all data is written to the disk, delete the pod that is using the PVC before creating the snapshot.
Procedure
To dynamically create a volume snapshot:
Create a file with the VolumeSnapshotClass object described by the following YAML:
volumesnapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snap
driver: hostpath.csi.k8s.io # 1
deletionPolicy: Delete
- 1
- The name of the CSI driver that is used to create snapshots of this VolumeSnapshotClass object. The name must be the same as the Provisioner field of the storage class that is responsible for the PVC that is being snapshotted.
Note: Depending on the driver that you used to configure persistent storage, additional parameters might be required. You can also use an existing VolumeSnapshotClass object.
Create the object you saved in the previous step by entering the following command:
$ oc create -f volumesnapshotclass.yaml
Create a VolumeSnapshot object:
volumesnapshot-dynamic.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysnap
spec:
  volumeSnapshotClassName: csi-hostpath-snap # 1
  source:
    persistentVolumeClaimName: myclaim # 2
- 1
- The request for a particular class by the volume snapshot. If the volumeSnapshotClassName setting is absent and there is a default volume snapshot class, a snapshot is created with the default volume snapshot class name. But if the field is absent and no default volume snapshot class exists, then no snapshot is created.
- 2
- The name of the PersistentVolumeClaim object bound to a persistent volume. This defines what you want to create a snapshot of. Required for dynamically provisioning a snapshot.
Create the object you saved in the previous step by entering the following command:
$ oc create -f volumesnapshot-dynamic.yaml
To manually provision a snapshot:
Provide a value for the
parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above.volumeSnapshotContentNamevolumesnapshot-manual.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo
spec:
  source:
    volumeSnapshotContentName: mycontent 1

1 The volumeSnapshotContentName parameter is required for pre-provisioned snapshots.
Create the object you saved in the previous step by entering the following command:
$ oc create -f volumesnapshot-manual.yaml
Verification
After the snapshot has been created in the cluster, additional details about the snapshot are available.
To display details about the volume snapshot that was created, enter the following command:
$ oc describe volumesnapshot mysnap

The following example displays details about the mysnap volume snapshot:

Example volumesnapshot.yaml output
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysnap
spec:
  source:
    persistentVolumeClaimName: myclaim
  volumeSnapshotClassName: csi-hostpath-snap
status:
  boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1
  creationTime: "2020-01-29T12:24:30Z" 2
  readyToUse: true 3
  restoreSize: 500Mi

1 The pointer to the actual storage content that was created by the controller.
2 The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time.
3 If the value is set to true, the snapshot can be used to restore as a new PVC. If the value is set to false, the snapshot was created, but the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes.
To verify that the volume snapshot was created, enter the following command:
$ oc get volumesnapshotcontent

The pointer to the actual content is displayed. If the boundVolumeSnapshotContentName field is populated, a VolumeSnapshotContent object exists and the snapshot was created.
- To verify that the snapshot is ready, confirm that the VolumeSnapshot object has readyToUse: true.
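For example, one way to check this field directly, assuming the snapshot is named mysnap:

$ oc get volumesnapshot mysnap -o jsonpath='{.status.readyToUse}'
true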
6.4.6. Deleting a volume snapshot
You can configure how OpenShift Container Platform deletes volume snapshots.
Procedure
Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example:

Example volumesnapshotclass.yaml file
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snap
driver: hostpath.csi.k8s.io
deletionPolicy: Delete 1

1 When deleting the volume snapshot, if the Delete value is set, the underlying snapshot is deleted along with the VolumeSnapshotContent object. If the Retain value is set, both the underlying snapshot and the VolumeSnapshotContent object remain. If the Retain value is set and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent object, the content remains. The snapshot itself is also retained in the storage back end.
Delete the volume snapshot by entering the following command:
$ oc delete volumesnapshot <volumesnapshot_name>

Example output

volumesnapshot.snapshot.storage.k8s.io "mysnapshot" deleted

If the deletion policy is set to Retain, delete the volume snapshot content by entering the following command:

$ oc delete volumesnapshotcontent <volumesnapshotcontent_name>

Optional: If the VolumeSnapshot object is not successfully deleted, enter the following command to remove any finalizers for the leftover resource so that the delete operation can continue:

Important
Only remove the finalizers if you are confident that there are no existing references from either persistent volume claims or volume snapshot contents to the VolumeSnapshot object. Even with the --force option, the delete operation does not delete snapshot objects until all finalizers are removed.

$ oc patch -n $PROJECT volumesnapshot/$NAME --type=merge -p '{"metadata": {"finalizers":null}}'

Example output

volumesnapshotclass.snapshot.storage.k8s.io "csi-ocs-rbd-snapclass" deleted

The finalizers are removed and the volume snapshot is deleted.
6.4.7. Restoring a volume snapshot
The VolumeSnapshot custom resource definition (CRD) content can be used to restore the existing volume to a previous state.
After the readyToUse value of your VolumeSnapshot CRD status is set to true, you can use that resource to provision a new volume that is pre-populated with data from the snapshot.
Procedure
Specify a VolumeSnapshot data source on a PVC, as shown in the following example:

Example pvc-restore.yaml file

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: mysnap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Create a PVC by entering the following command:
$ oc create -f pvc-restore.yaml

Verify that the restored PVC has been created by entering the following command:

$ oc get pvc

A new PVC such as myclaim-restore is displayed.
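To consume the restored data, reference the new PVC from a pod. The following is a minimal sketch; the image and mount path are placeholders, and only the claimName is taken from the example above.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-restore
spec:
  containers:
  - name: myapp
    image: registry.access.redhat.com/ubi8/ubi   # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data                # placeholder mount path
      name: restored-data
  volumes:
  - name: restored-data
    persistentVolumeClaim:
      claimName: myclaim-restore      # the PVC restored from the snapshot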
6.5. CSI volume cloning
Volume cloning duplicates an existing persistent volume to help protect against data loss in OpenShift Container Platform. This feature is only available with supported Container Storage Interface (CSI) drivers. You should be familiar with persistent volumes before you provision a CSI volume clone.
6.5.1. Overview of CSI volume cloning
A Container Storage Interface (CSI) volume clone is a duplicate of an existing persistent volume at a particular point in time.
Volume cloning is similar to volume snapshots, although it is more efficient. For example, a cluster administrator can duplicate a cluster volume by creating another instance of the existing cluster volume.
Cloning creates an exact duplicate of the specified volume on the back-end device, rather than creating a new empty volume. After dynamic provisioning, you can use a volume clone just as you would use any standard volume.
No new API objects are required for cloning. The existing dataSource field in the PersistentVolumeClaim object is expanded so that it can accept the name of an existing PersistentVolumeClaim in the same namespace.
6.5.1.1. Support limitations
By default, OpenShift Container Platform supports CSI volume cloning with these limitations:
- The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC.
- Cloning is supported with a different storage class:
  - The destination volume can use the same storage class as the source, or a different one.
  - You can use the default storage class and omit storageClassName in the spec.
- Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
- CSI drivers might not have implemented the volume cloning functionality. For details, see the CSI driver documentation.
6.5.2. Provisioning a CSI volume clone
When you create a cloned persistent volume claim (PVC) API object, you trigger the provisioning of a CSI volume clone. The clone pre-populates with the contents of another PVC, adhering to the same rules as any other persistent volume. The one exception is that you must add a dataSource that references an existing PVC in the same namespace.
Prerequisites
- You are logged in to a running OpenShift Container Platform cluster.
- Your PVC is created using a CSI driver that supports volume cloning.
- Your storage back end is configured for dynamic provisioning. Cloning support is not available for static provisioners.
Procedure
To clone a PVC from an existing PVC:
Create and save a file with the PersistentVolumeClaim object described by the following YAML:

Example pvc-clone.yaml file

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1-clone
  namespace: mynamespace
spec:
  storageClassName: csi-cloning 1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1

1 The name of the storage class that provisions the storage back end. The default storage class can be used and storageClassName can be omitted in the spec.
Create the object you saved in the previous step by running the following command:
$ oc create -f pvc-clone.yaml

A new PVC pvc-1-clone is created.

Verify that the volume clone was created and is ready by running the following command:

$ oc get pvc pvc-1-clone

The pvc-1-clone shows that it is Bound.

You are now ready to use the newly cloned PVC to configure a pod.
Create and save a file with the Pod object described by the following YAML. For example:

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc-1-clone 1

1 The cloned PVC created during the CSI volume cloning operation.
The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original dataSource PVC.
6.6. CSI automatic migration
In-tree storage drivers that are traditionally shipped with OpenShift Container Platform are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. OpenShift Container Platform provides automatic migration for certain supported in-tree volume plugins to their equivalent CSI drivers.
6.6.1. Overview
Volumes that are provisioned by using in-tree storage plugins, and that are supported by this feature, are migrated to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; OpenShift Container Platform only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor are its contents changed.
The following in-tree to CSI drivers are supported:
| In-tree/CSI drivers | Support level | CSI auto migration enabled automatically? |
|---|---|---|
| Azure Disk, OpenStack Cinder, AWS EBS, GCP PD | Generally available (GA) | Yes. For more information, see Automatic migration of in-tree volumes to CSI. |
| Azure File, VMware vSphere | Technology Preview (TP) | No. To enable, see Manually enabling CSI automatic migration. Also, for vSphere, see Updating from OpenShift Container Platform 4.12 to 4.13 with vSphere in-tree PVs and Updating from OpenShift Container Platform 4.12 to 4.14 with vSphere in-tree PVs. |
CSI automatic migration should be seamless. This feature does not change how you use any existing API objects, such as PersistentVolumes, PersistentVolumeClaims, and StorageClasses.
Enabling CSI automatic migration for in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plugin did not support them.
6.6.2. Automatic migration of in-tree volumes to CSI
OpenShift Container Platform supports automatic and seamless migration for the following in-tree volume types to their Container Storage Interface (CSI) driver counterpart:
- Azure Disk
- OpenStack Cinder
- Amazon Web Services (AWS) Elastic Block Storage (EBS)
- Google Compute Engine Persistent Disk (GCP PD)
CSI migration for these volume types is considered generally available (GA), and requires no manual intervention.
For new installations of OpenShift Container Platform 4.11 and later, the default storage class is the CSI storage class. All volumes provisioned by using this storage class are CSI persistent volumes (PVs).
For clusters upgraded from 4.10 and earlier to 4.11 and later, the CSI storage class is created, and is set as the default if no default storage class was set before the upgrade. In the very unlikely case that there is a storage class with the same name, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion, to work for existing in-tree PVs. While storage classes referencing the in-tree storage plugins continue to work, we recommend that you switch the default storage class to the CSI storage class.
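Switching the default is done with the standard storageclass.kubernetes.io/is-default-class annotation. A sketch, assuming an AWS cluster where the in-tree class is typically gp2 and the CSI class is gp2-csi; substitute the class names that exist in your cluster:

$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ oc patch storageclass gp2-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'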
6.6.3. Manually enabling CSI automatic migration
If you want to test Container Storage Interface (CSI) migration in development or staging OpenShift Container Platform clusters, you must manually enable in-tree to CSI migration for the following in-tree volume types:
- VMware vSphere Disk
- Azure File
CSI automatic migration for the preceding in-tree volume plugins and CSI driver pairs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
After migration, the default storage class remains the in-tree storage class.
CSI automatic migration will be enabled by default for all in-tree storage plugins in a future OpenShift Container Platform release, so it is highly recommended that you test it now and report any issues.
Enabling CSI automatic migration drains, and then restarts, all nodes in the cluster in sequence. This might take some time.
Procedure
Enable feature gates (see Nodes → Working with clusters → Enabling features using feature gates).
Important
After turning on Technology Preview features using feature gates, they cannot be turned off. As a result, cluster upgrades are prevented.
The following configuration example enables CSI automatic migration for all CSI drivers supported by this feature that are currently in Technology Preview (TP) status:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade 1
...

1 Enables automatic migration for Azure File and VMware vSphere.
You can specify CSI automatic migration for a selected CSI driver by setting featureSet to CustomNoUpgrade, and setting featuregates to one of the following:
- CSIMigrationAzureFile
- CSIMigrationvSphere
The following configuration example enables automatic migration to the vSphere CSI driver only:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: CustomNoUpgrade
  customNoUpgrade:
    enabled:
      - CSIMigrationvSphere 1
...

1 Enables automatic migration for vSphere only.
6.6.4. Updating from OpenShift Container Platform 4.12 to 4.13 with vSphere in-tree PVs
If you are using vSphere in-tree persistent volumes (PVs) and want to update from OpenShift Container Platform 4.12 to 4.13, update vSphere vCenter and ESXi hosts to 7.0 Update 3L or 8.0 Update 2; otherwise, the OpenShift Container Platform upgrade is blocked. After updating vSphere, your OpenShift Container Platform update can occur, and automatic Container Storage Interface (CSI) migration for vSphere occurs only if you opt in.
Alternatively, if you do not want to update vSphere, you can proceed with an OpenShift Container Platform update by running the following command to perform an administrator acknowledgment:
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.12-kube-126-vsphere-migration-in-4.14":"true"}}' --type=merge
It is generally safe to provide the requested administrator acknowledgment for updates from OpenShift Container Platform 4.12 to 4.13 because CSI migration is not yet enabled for upgraded clusters from 4.12 to 4.13. However, Red Hat recommends that you start planning an update of your vSphere environment for a future update to 4.14, so that all the in-tree volumes can be managed by the CSI driver seamlessly.
If you do not update to OpenShift Container Platform 4.13.10, or later, and do not update vSphere, and then opt in to migration (see Manually enabling CSI automatic migration under Additional resources), known issues can occur. Before opting in to migration, carefully read this knowledge base article.
6.6.5. Updating from OpenShift Container Platform 4.12 to 4.14 with vSphere in-tree PVs
If you are using vSphere in-tree persistent volumes (PVs) and want to update from OpenShift Container Platform 4.12 to 4.14, update vSphere vCenter and ESXi hosts to 7.0 Update 3L or 8.0 Update 2; otherwise, the OpenShift Container Platform update is blocked. After updating vSphere, your OpenShift Container Platform update can occur, and automatic Container Storage Interface (CSI) migration for vSphere occurs by default.
Alternatively, if you do not want to update vSphere, you can proceed with an OpenShift Container Platform update by running both of the following commands to perform an administrator acknowledgment:
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.12-kube-126-vsphere-migration-in-4.14":"true"}}' --type=merge

$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-127-vsphere-migration-in-4.14":"true"}}' --type=merge
If you update to OpenShift Container Platform 4.14 without updating vSphere, known issues can occur due to CSI migration being enabled by default in OpenShift Container Platform 4.14. Before updating, carefully read this knowledge base article.
Updating from OpenShift Container Platform 4.12 to 4.14 is an Extended Update Support (EUS)-to-EUS update. To understand the ramifications for this type of update and how to perform it, see the EUS-to-EUS update link in the Additional resources section below.
6.7. AliCloud Disk CSI Driver Operator
6.7.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Alibaba AliCloud Disk Storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to AliCloud Disk storage assets, OpenShift Container Platform installs the AliCloud Disk CSI Driver Operator and the AliCloud Disk CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The AliCloud Disk CSI Driver Operator provides a storage class (alicloud-disk) that you can use to create persistent volume claims (PVCs). The AliCloud Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage.
- The AliCloud Disk CSI driver enables you to create and mount AliCloud Disk PVs.
6.7.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
Additional resources
6.8. AWS Elastic Block Store CSI Driver Operator
6.8.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic Block Store (EBS).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Container Platform installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in Persistent storage using AWS Elastic Block Store.
- The AWS EBS CSI driver enables you to create and mount AWS EBS PVs.
If you installed the AWS EBS CSI Operator and driver on an OpenShift Container Platform 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to OpenShift Container Platform 4.12.
6.8.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage.
In future OpenShift Container Platform versions, volumes provisioned by using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Container Platform, see Persistent storage using AWS Elastic Block Store.
6.9. AWS Elastic File Service CSI Driver Operator
6.9.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, OpenShift Container Platform installs the AWS EFS CSI Operator and the AWS EFS CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default that can be used to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage.
- The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.
AWS EFS only supports regional volumes, not zonal volumes.
6.9.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.9.3. Setting up the AWS EFS CSI Driver Operator
- Install the AWS EFS CSI Driver Operator.
- Install the AWS EFS CSI Driver.
6.9.3.1. Installing the AWS EFS CSI Driver Operator
The AWS EFS CSI Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
To install the AWS EFS CSI Driver Operator from the web console:
- Log in to the web console.
Install the AWS EFS CSI Operator:
- Click Operators → OperatorHub.
- Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.
Click the AWS EFS CSI Driver Operator button.
Important
Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.
- On the AWS EFS CSI Driver Operator page, click Install.
On the Install Operator page, ensure that:
- All namespaces on the cluster (default) is selected.
- Installed Namespace is set to openshift-cluster-csi-drivers.
Click Install.
After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.
Next steps
- If you are using AWS EFS with AWS Secure Token Service (STS), you must configure the AWS EFS CSI Driver with STS. For more information, see Configuring AWS EFS CSI Driver with STS.
6.9.3.2. Configuring AWS EFS CSI Driver Operator with Security Token Service
This procedure explains how to configure the AWS EFS CSI Driver Operator with OpenShift Container Platform on AWS Security Token Service (STS).
Perform this procedure when you have installed the AWS EFS CSI Operator, but have not yet installed the AWS EFS CSI driver as part of the Installing the AWS EFS CSI Driver Operator procedure.
If you perform this procedure after installing the driver and creating volumes, your volumes will fail to mount into pods.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- AWS account credentials
- You have installed the AWS EFS CSI Operator.
Procedure
To configure the AWS EFS CSI Driver Operator with STS:
- Extract the CCO utility (ccoctl) binary from the OpenShift Container Platform release image, which you used to install the cluster with STS. For more information, see "Configuring the Cloud Credential Operator utility".
ccoctl Create and save an EFS
YAML file, such as shown in the following example, and then place it in theCredentialsRequestdirectory:credrequestsExample
apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-saRun the
tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (ccoctl).<path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml$ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com-
is the name used to tag any cloud resources that are created for tracking.
name=<name> -
is the AWS region where cloud resources are created.
region=<aws_region> -
is the directory containing the EFS CredentialsRequest file in previous step.
dir=<path_to_directory_with_list_of_credentials_requests>/credrequests - is the AWS account ID.
<aws_account_id>Example
$ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com

Example output

2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- created
2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-
Create the AWS EFS cloud credentials and secret:

$ oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml

Example

$ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml

Example output
secret/aws-efs-cloud-credentials created
6.9.3.3. Installing the AWS EFS CSI Driver
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
- Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
- On the Instances tab, click Create ClusterCSIDriver.
Use the following YAML file:
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed

- Click Create.
Wait for the following Conditions to change to a "True" status:
- AWSEFSDriverNodeServiceControllerAvailable
- AWSEFSDriverControllerServiceControllerAvailable
6.9.4. Creating the AWS EFS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.
6.9.4.1. Creating the AWS EFS storage class using the console
Procedure
- In the OpenShift Container Platform console, click Storage → StorageClasses.
- On the StorageClasses page, click Create StorageClass.
On the StorageClass page, perform the following steps:
- Enter a name to reference the storage class.
- Optional: Enter the description.
- Select the reclaim policy.
- Select efs.csi.aws.com from the Provisioner drop-down list.
- Optional: Set the configuration parameters for the selected provisioner.
- Click Create.
6.9.4.2. Creating the AWS EFS storage class using the CLI
Procedure
Create a StorageClass object:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap 1
  fileSystemId: fs-a5324911 2
  directoryPerms: "700" 3
  gidRangeStart: "1000" 4
  gidRangeEnd: "2000" 5
  basePath: "/dynamic_provisioning" 6

1 provisioningMode must be efs-ap to enable dynamic provisioning.
2 fileSystemId must be the ID of the EFS volume created manually.
3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
4 5 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
6 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
Note
A cluster admin can create several StorageClass objects, each using a different EFS volume.
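For example, a second class could point at another file system. A sketch, with a hypothetical fileSystemId; substitute the ID of a second EFS volume that you created:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-team-b
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # hypothetical second EFS volume
  directoryPerms: "700"
  basePath: "/dynamic_provisioning"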
6.9.5. Creating and configuring access to EFS volumes in AWS
This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OpenShift Container Platform.
Prerequisites
- AWS account credentials
Procedure
To create and configure access to an EFS volume in AWS:
- On the AWS console, open https://console.aws.amazon.com/efs.
Click Create file system:
- Enter a name for the file system.
- For Virtual Private Cloud (VPC), select your OpenShift Container Platform cluster's virtual private cloud (VPC).
- Accept default settings for all other selections.
Wait for the volume and mount targets to finish being fully created:
- Go to https://console.aws.amazon.com/efs#/file-systems.
- Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).
- On the Network tab, copy the Security Group ID (you will need this in the next step).
- Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.
On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OpenShift Container Platform nodes to access EFS volumes:
- Type: NFS
- Protocol: TCP
- Port range: 2049
Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)
This step allows OpenShift Container Platform to use NFS ports from the cluster.
- Save the rule.
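If you prefer the AWS CLI over the console, an equivalent rule can be added with aws ec2 authorize-security-group-ingress. A sketch with placeholder values for the security group ID and the nodes' IP range:

$ aws ec2 authorize-security-group-ingress \
    --group-id <efs_security_group_id> \
    --protocol tcp \
    --port 2049 \
    --cidr 10.0.0.0/16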
6.9.6. Dynamic provisioning for AWS EFS
The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass/EFS volume pair.
Note that PVC.spec.resources is not enforced.
In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (like petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume.
Using monitoring of EFS volume sizes in AWS is strongly recommended.
Prerequisites
- You have created AWS EFS volumes.
- You have created the AWS EFS storage class.
Procedure
To enable dynamic provisioning:
Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.
6.9.7. Creating static PVs with AWS EFS
It is possible to use an AWS EFS volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.
Prerequisites
- You have created AWS EFS volumes.
Procedure
Create the PV by using the following YAML file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity: 1
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-ae66151a 2
    volumeAttributes:
      encryptInTransit: "false" 3

1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
3 If desired, you can disable encryption in transit. Encryption is enabled by default.
If you have problems setting up static PVs, see AWS EFS troubleshooting.
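A static PV is typically consumed through a PVC that binds to it explicitly. A sketch, assuming the efs-pv object above; the empty storageClassName prevents dynamic provisioning from intercepting the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-static-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""     # empty string: do not use dynamic provisioning
  volumeName: efs-pv       # bind directly to the static PV created above
  resources:
    requests:
      storage: 5Gi         # matched against the PV capacity; not enforced by EFS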
6.9.8. AWS EFS security
The following information is important for AWS EFS security.
When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.
As a consequence, EFS volumes silently ignore FSGroup; OpenShift Container Platform is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.
Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.
6.9.9. AWS EFS troubleshooting
The following information provides guidance on how to troubleshoot issues with AWS EFS:
- The AWS EFS Operator and CSI driver run in the openshift-cluster-csi-drivers namespace.
openshift-cluster-csi-drivers To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:
$ oc adm must-gather

Example output

[must-gather      ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
[must-gather      ] OUT namespace/openshift-must-gather-xm4wq created
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
[must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created

To show AWS EFS Operator errors, view the ClusterCSIDriver status:

$ oc get clustercsidriver efs.csi.aws.com -o yaml

If a volume cannot be mounted to a pod (as shown in the output of the following command):

$ oc describe pod
...
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    2m13s default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
  Warning  FailedMount  13s   kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1
  Warning  FailedMount  10s   kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition

1 Warning message indicating volume not mounted.
This error is frequently caused by AWS dropping packets between an OpenShift Container Platform node and AWS EFS.
Check that the following are correct:
- AWS firewall and Security Groups
- Networking: port number and IP addresses
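As a rough connectivity spot-check, you can probe TCP port 2049 from a node by using the same oc debug pattern shown earlier in this chapter; substitute the node name and a mount target IP address from the EFS console:

$ oc debug node/<node_name> -- chroot /host bash -c 'timeout 5 bash -c "</dev/tcp/<mount_target_ip>/2049" && echo reachable'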
6.9.10. Uninstalling the AWS EFS CSI Driver Operator
All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
To uninstall the AWS EFS CSI Driver Operator from the web console:
- Log in to the web console.
- Stop all applications that use AWS EFS PVs.
Delete all AWS EFS PVs:
- Click Storage → PersistentVolumeClaims.
- Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.
Uninstall the AWS EFS CSI Driver:
Note
Before you can uninstall the Operator, you must remove the CSI driver first.
- Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
- On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.
- When prompted, click Delete.
Uninstall the AWS EFS CSI Operator:
- Click Operators → Installed Operators.
- On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.
- On the upper, right of the Installed Operators > Operator details page, click Actions → Uninstall Operator.
When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.
After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.
Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. An OpenShift Container Platform cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC, because Amazon does not allow deletion of such a VPC.
6.10. Azure Disk CSI Driver Operator
6.10.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to Azure Disk storage assets, OpenShift Container Platform installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The Azure Disk CSI Driver Operator provides a storage class named managed-csi that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
- The Azure Disk CSI driver enables you to create and mount Azure Disk PVs.
6.10.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
OpenShift Container Platform provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration.
6.10.3. Creating a storage class with storage account type
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, you can obtain dynamically provisioned persistent volumes.
When creating a storage class, you can designate the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Standard_LRS, Premium_LRS, StandardSSD_LRS, UltraSSD_LRS, Premium_ZRS, and StandardSSD_ZRS.
ZRS has some region limitations. For information about these limitations, see ZRS limitations.
Prerequisites
- Access to an OpenShift Container Platform cluster with administrator rights
Procedure
Use the following steps to create a storage class with a storage account type.
Create a storage class designating the storage account type by using a YAML file similar to the following:

$ oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class>
provisioner: disk.csi.azure.com
parameters:
  skuName: <storage-class-account-type>
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF

Ensure that the storage class was created by listing the storage classes:
$ oc get storageclass

Example output

NAME                    PROVISIONER          RECLAIMPOLICY  VOLUMEBINDINGMODE     ALLOWVOLUMEEXPANSION  AGE
azurefile-csi           file.csi.azure.com   Delete         Immediate             true                  68m
managed-csi (default)   disk.csi.azure.com   Delete         WaitForFirstConsumer  true                  68m
sc-prem-zrs             disk.csi.azure.com   Delete         WaitForFirstConsumer  true                  4m25s 1

1 New storage class with storage account type.
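As a concrete instance of the preceding template, the sc-prem-zrs class shown in the example output could plausibly have been created as follows (a sketch; verify that Premium_ZRS is available in your region):

$ oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-prem-zrs
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_ZRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF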
6.10.4. Machine sets that deploy machines with ultra disks using PVCs
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage intended for use with the most demanding data workloads.
Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
6.10.4.1. Creating machines with ultra disks by using machine sets
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

$ oc edit machineset <machine_set_name>

where <machine_set_name> is the machine set that you want to provision machines with ultra disks.

Add the following lines in the positions indicated:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd
      providerSpec:
        value:
          ultraSSDCapability: Enabled
Create a machine set using the updated configuration by running the following command:

$ oc create -f <machine_set_name>.yaml

Create a storage class that contains the following YAML definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc 1
parameters:
  cachingMode: None
  diskIopsReadWrite: "2000" 2
  diskMbpsReadWrite: "320" 3
  kind: managed
  skuname: UltraSSD_LRS
provisioner: disk.csi.azure.com 4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer 5

1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
2 Specify the number of IOPS for the storage class.
3 Specify the throughput in MBps for the storage class.
4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ultra-disk-sc
  resources:
    requests:
      storage: 4Gi

Create a pod that contains the following YAML definition:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ultra
spec:
  nodeSelector:
    disk: ultrassd
  containers:
  - name: nginx-ultra
    image: alpine:latest
    command:
      - "sleep"
      - "infinity"
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ultra-disk
Verification
Validate that the machines are created by running the following command:
$ oc get machines

The machines should be in the Running state.

For a machine that is running and has a node attached, validate the partition by running the following command:

$ oc debug node/<node_name> -- chroot /host lsblk

In this command, oc debug node/<node_name> starts a debugging shell on the node <node_name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
Next steps
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:

apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
  - name: ssd-benchmark1
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - name: lun0p1
      mountPath: "/tmp"
  volumes:
  - name: lun0p1
    hostPath:
      path: /var/lib/lun0p1
      type: DirectoryOrCreate
  nodeSelector:
    disktype: ultrassd
6.10.4.2. Troubleshooting resources for machine sets that enable ultra disks
Use the information in this section to understand and recover from issues you might encounter.
6.10.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk
If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered.
For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, describe the pod by running the following command:
$ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>
6.11. Azure File CSI Driver Operator
6.11.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to Azure File storage assets, OpenShift Container Platform installs the Azure File CSI Driver Operator and the Azure File CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The Azure File CSI Driver Operator provides a storage class that is named azurefile-csi that you can use to create persistent volume claims (PVCs).
- The Azure File CSI driver enables you to create and mount Azure File PVs. The Azure File CSI driver supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
Azure File CSI Driver Operator does not support:
- Virtual hard disks (VHD)
- Network File System (NFS): OpenShift Container Platform does not deploy an NFS-backed storage class.
- Running on nodes with FIPS mode enabled.
For more information about supported features, see Supported CSI drivers and features.
6.11.2. NFS support
OpenShift Container Platform supports the Azure File Container Storage Interface (CSI) Driver Operator with Network File System (NFS) with the following restrictions:
- Creating pods with Azure File NFS volumes that are scheduled to the control plane node causes the mount to be denied.
  To work around this issue: If your control plane nodes are schedulable, and the pods can run on worker nodes, use nodeSelector or Affinity to schedule the pod on worker nodes, as shown in the sketch after this list.
- FS Group policy behavior:
  Important
  Azure File CSI with NFS does not honor the fsGroupChangePolicy requested by pods. Azure File CSI with NFS applies a default OnRootMismatch FS Group policy regardless of the policy requested by the pod.
- The Azure File CSI Operator does not automatically create a storage class for NFS. You must create it manually. Use a file similar to the following:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
provisioner: file.csi.azure.com
parameters:
  protocol: nfs
  skuName: Premium_LRS  # available values: Premium_LRS, Premium_ZRS
mountOptions:
  - nconnect=4
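The worker-scheduling workaround mentioned in the first restriction can be expressed with a simple nodeSelector. A sketch; the image and claim name are placeholders, while node-role.kubernetes.io/worker is the standard worker node label:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-app
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # keep the pod off control plane nodes
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi   # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/nfs
      name: nfs-vol
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: azurefile-nfs-pvc   # placeholder PVC that uses the NFS storage class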
6.11.3. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
Additional resources
6.12. Azure Stack Hub CSI Driver Operator
6.12.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure Stack Hub Storage. Azure Stack Hub, which is part of the Azure Stack portfolio, allows you to run apps in an on-premise environment and deliver Azure services in your datacenter.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to Azure Stack Hub storage assets, OpenShift Container Platform installs the Azure Stack Hub CSI Driver Operator and the Azure Stack Hub CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- The Azure Stack Hub CSI Driver Operator provides a storage class (managed-csi), with "Standard_LRS" as the default storage account type, that you can use to create persistent volume claims (PVCs). The Azure Stack Hub CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
- The Azure Stack Hub CSI driver enables you to create and mount Azure Stack Hub PVs.
6.12.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.13. GCP PD CSI Driver Operator
6.13.1. Overview
OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Container Platform installs the GCP PD CSI Driver Operator and the GCP PD CSI driver, by default, in the openshift-cluster-csi-drivers namespace.
- GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs. You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.
- GCP PD driver: The driver enables you to create and mount GCP PD PVs.
OpenShift Container Platform provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration.
6.13.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.13.3. GCP PD CSI driver storage class parameters
The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering CreateVolume operations.
The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes the storage class parameters that the driver supports.
| Parameter | Values | Default | Description |
|---|---|---|---|
| type | pd-ssd or pd-standard | pd-standard | Allows you to choose between standard PVs or solid-state-drive PVs. The driver does not validate the value, so all possible values are accepted. |
| replication-type | none or regional-pd | none | Allows you to choose between zonal or regional PVs. |
| disk-encryption-kms-key | Fully qualified resource identifier for the key to use to encrypt new disks. | Empty string | Uses customer-managed encryption keys (CMEK) to encrypt new disks. |
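For example, a storage class that combines these parameters might look like the following sketch, which requests SSD-backed regional PVs. The parameter names come from the table above; the class name is arbitrary:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-ssd-regional
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                  # solid-state-drive PVs
  replication-type: regional-pd # regional rather than zonal PVs
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true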
6.13.4. Creating a custom-encrypted persistent volume
When you create a PersistentVolumeClaim object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV.
For encryption, the newly attached PV that you create uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key.
Prerequisites
- You are logged in to a running OpenShift Container Platform cluster.
- You have created a Cloud KMS key ring and key version.
For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).
Procedure
To create a custom-encrypted PV, complete the following steps:
Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-cmek
provisioner: pd.csi.storage.gke.io
volumeBindingMode: "WaitForFirstConsumer"
allowVolumeExpansion: true
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1

1 This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource's ID and Getting a Cloud KMS resource ID.
Note
You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io.

Deploy the storage class on your OpenShift Container Platform cluster by using the oc command:

$ oc describe storageclass csi-gce-pd-cmek

Example output

Name:                  csi-gce-pd-cmek
IsDefaultClass:        No
Annotations:           None
Provisioner:           pd.csi.storage.gke.io
Parameters:            disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
AllowVolumeExpansion:  true
MountOptions:          none
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                none
that matches the name of your storage class object that you created in the previous step:pvc.yamlkind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6GiNoteIf you marked the new storage class as default, you can omit the
field.storageClassNameApply the PVC on your cluster:
$ oc apply -f pvc.yamlGet the status of your PVC and verify that it is created and bound to a newly provisioned PV:
$ oc get pvcExample output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9sNoteIf your storage class has the
field set tovolumeBindingMode, you must create a pod to use the PVC before you can verify it.WaitForFirstConsumer
Your CMEK-protected PV is now ready to use with your OpenShift Container Platform cluster.
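Because the `csi-gce-pd-cmek` storage class uses `WaitForFirstConsumer` binding, a minimal pod sketch that consumes the claim and triggers provisioning might look like the following. The pod name and image are illustrative only, not part of the procedure:
apiVersion: v1
kind: Pod
metadata:
  name: task-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data
      name: encrypted-vol
  volumes:
  - name: encrypted-vol
    persistentVolumeClaim:
      claimName: podpvc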
6.14. Google Cloud Platform Filestore CSI Driver Operator
6.14.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore Storage.
GCP Filestore CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the `openshift-cluster-csi-drivers` namespace.
- The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed. The GCP Filestore CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage.
- The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs.
6.14.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.14.3. Installing the GCP Filestore CSI Driver Operator
The Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
To install the GCP Filestore CSI Driver Operator from the web console:
- Log in to the web console.
Enable the Filestore API in the GCE project by running the following command:
$ gcloud services enable file.googleapis.com --project <my_gce_project> 1
- 1 Replace `<my_gce_project>` with your Google Cloud project.
You can also do this by using the Google Cloud web console.
Install the GCP Filestore CSI Operator:
- Click Operators → OperatorHub.
- Locate the GCP Filestore CSI Operator by typing GCP Filestore in the filter box.
- Click the GCP Filestore CSI Driver Operator button.
- On the GCP Filestore CSI Driver Operator page, click Install.
On the Install Operator page, ensure that:
- All namespaces on the cluster (default) is selected.
- Installed Namespace is set to openshift-cluster-csi-drivers.
Click Install.
After the installation finishes, the GCP Filestore CSI Operator is listed in the Installed Operators section of the web console.
Install the GCP Filestore CSI Driver:
- Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
On the Instances tab, click Create ClusterCSIDriver.
Use the following YAML file:
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: filestore.csi.storage.gke.io
spec:
  managementState: Managed
- Click Create.
Wait for the following Conditions to change to a "true" status:
- GCPFilestoreDriverCredentialsRequestControllerAvailable
- GCPFilestoreDriverNodeServiceControllerAvailable
- GCPFilestoreDriverControllerServiceControllerAvailable
6.14.4. Creating a storage class for GCP Filestore Storage
After installing the Operator, you should create a storage class for dynamic provisioning of Google Cloud Platform (GCP) Filestore volumes.
Prerequisites
- You are logged in to the running OpenShift Container Platform cluster.
Procedure
To create a storage class:
Create a storage class using the following example YAML file:
Example YAML file
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: filestore-csi
provisioner: filestore.csi.storage.gke.io
parameters:
  network: network-name 1
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- 1 Specify the name of the GCP virtual private cloud (VPC) network in which Filestore instances should be created.
It is recommended to specify the VPC network in which the Filestore instances should be created. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project. On IPI installations, the VPC network name is typically the cluster name with the suffix "-network". However, on UPI installations, the VPC network name can be any value chosen by the user.
You can find the VPC network name by inspecting the `MachineSets` objects with the following command:
$ oc -n openshift-machine-api get machinesets -o yaml | grep "network:"
Example output
- network: gcp-filestore-network
(...)
In this example, the VPC network name in this cluster is "gcp-filestore-network".
6.14.5. Destroying clusters and GCP Filestore
Typically, if you destroy a cluster, the OpenShift Container Platform installer deletes all of the cloud resources that belong to that cluster. However, when a cluster is destroyed, Google Cloud Platform (GCP) Filestore instances are not automatically deleted, so you must manually delete all persistent volume claims (PVCs) that use the Filestore storage class before destroying the cluster.
Procedure
To delete all GCP Filestore PVCs:
List all PVCs that were created by using the `filestore-csi` storage class:
$ oc get pvc -o json -A | jq -r '.items[] | select(.spec.storageClassName == "filestore-csi")'
Delete all of the PVCs listed by the previous command:
$ oc delete pvc <pvc-name> 1
- 1 Replace <pvc-name> with the name of any PVC that you need to delete.
6.15. IBM VPC Block CSI Driver Operator
6.15.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM Virtual Private Cloud (VPC) Block Storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to IBM VPC Block storage assets, OpenShift Container Platform installs the IBM VPC Block CSI Driver Operator and the IBM VPC Block CSI driver by default in the `openshift-cluster-csi-drivers` namespace.
- The IBM VPC Block CSI Driver Operator provides three storage classes named `ibmc-vpc-block-10iops-tier` (default), `ibmc-vpc-block-5iops-tier`, and `ibmc-vpc-block-custom` for different tiers that you can use to create persistent volume claims (PVCs). The IBM VPC Block CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage.
- The IBM VPC Block CSI driver enables you to create and mount IBM VPC Block PVs.
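As an illustration, a minimal PVC sketch that requests storage from the default tier might look like the following. The claim name `ibm-block-claim` and the requested size are hypothetical:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibm-block-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc-vpc-block-10iops-tier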
6.15.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
Additional resources
6.16. OpenStack Cinder CSI Driver Operator
6.16.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, OpenShift Container Platform installs the OpenStack Cinder CSI Driver Operator and the OpenStack Cinder CSI driver in the `openshift-cluster-csi-drivers` namespace.
- The OpenStack Cinder CSI Driver Operator provides a CSI storage class that you can use to create PVCs.
- The OpenStack Cinder CSI driver enables you to create and mount OpenStack Cinder PVs.
OpenShift Container Platform provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration.
6.16.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage.
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
6.16.3. Making OpenStack Cinder CSI the default storage class
The OpenStack Cinder CSI driver uses the `cinder.csi.openstack.org` parameter key to support dynamic provisioning.
To enable OpenStack Cinder CSI provisioning in OpenShift Container Platform, it is recommended that you overwrite the default in-tree storage class with `standard-csi`.
In OpenShift Container Platform, the default storage class references the in-tree Cinder driver. However, with CSI automatic migration enabled, volumes created using the default storage class actually use the CSI driver.
Procedure
Use the following steps to apply the `standard-csi` storage class:
List the storage class:
$ oc get storageclass
Example output
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard(default)    kubernetes.io/cinder       Delete          WaitForFirstConsumer   true                   46h
standard-csi         cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   46h
Change the value of the annotation `storageclass.kubernetes.io/is-default-class` to `false` for the default storage class, as shown in the following example:
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
.storageclass.kubernetes.io/is-default-class=true$ oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'Verify that the PVC is now referencing the CSI storage class by default:
$ oc get storageclass
Example output
NAME                  PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard              kubernetes.io/cinder       Delete          WaitForFirstConsumer   true                   46h
standard-csi(default) cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   46h
Optional: You can define a new PVC without having to specify the storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
A PVC that does not specify a specific storage class is automatically provisioned by using the default storage class.
Optional: After the new file has been configured, create it in your cluster:
$ oc create -f cinder-claim.yaml
6.17. OpenStack Manila CSI Driver Operator
6.17.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for the OpenStack Manila shared file system service.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to Manila storage assets, OpenShift Container Platform installs the Manila CSI Driver Operator and the Manila CSI driver by default on any OpenStack cluster that has the Manila service enabled.
- The Manila CSI Driver Operator creates the required storage class that is needed to create PVCs for all available Manila share types. The Operator is installed in the `openshift-cluster-csi-drivers` namespace.
- The Manila CSI driver enables you to create and mount Manila PVs. The driver is installed in the `openshift-manila-csi-driver` namespace.
6.17.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.17.3. Manila CSI Driver Operator limitations
The following limitations apply to the Manila Container Storage Interface (CSI) Driver Operator:
- Only NFS is supported
- OpenStack Manila supports many network-attached storage protocols, such as NFS, CIFS, and CEPHFS, and these can be selectively enabled in the OpenStack cloud. The Manila CSI Driver Operator in OpenShift Container Platform only supports using the NFS protocol. If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform.
- Snapshots are not supported if the back end is CephFS-NFS
- To take snapshots of persistent volumes (PVs) and revert volumes to snapshots, you must ensure that the Manila share type that you are using supports these features. A Red Hat OpenStack administrator must enable support for snapshots (share type extra-spec `snapshot_support`) and for creating shares from snapshots (share type extra-spec `create_share_from_snapshot_support`) in the share type associated with the storage class you intend to use.
-
Since Manila CSI provides shared file systems for access by multiple readers and multiple writers, it does not support the use of FSGroups. This is true even for persistent volumes created with the ReadWriteOnce access mode. It is therefore important not to specify the
fsTypeattribute in any storage class that you manually create for use with Manila CSI Driver.
In Red Hat OpenStack Platform 16.x and 17.x, the Shared File Systems service (Manila) with CephFS through NFS fully supports serving shares to OpenShift Container Platform through the Manila CSI. However, this solution is not intended for massive scale. Be sure to review important recommendations in CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform.
6.17.4. Dynamically provisioning Manila CSI volumes
OpenShift Container Platform installs a storage class for each available Manila share type.
The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plugin. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests.
You can use the same pod and persistent volume claim (PVC) definitions on-premise that you use with OpenShift Container Platform on AWS, Google Cloud, Azure, and other platforms, with the exception of the storage class reference in the PVC definition.
By default, the access rule assigned to a volume is 0.0.0.0/0. To limit the clients that can mount the persistent volume (PV), create a new storage class with an IP address or a subnet mask in the `nfs-shareClient` storage class parameter, as shown in the sketch that follows.
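A minimal sketch of such a restricted storage class follows. The class name, the share type value, and the client subnet are hypothetical and assume an NFS-backed Manila share type named "gold" is available in your cloud:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-manila-gold-restricted
provisioner: manila.csi.openstack.org
parameters:
  type: gold
  nfs-shareClient: 10.0.0.0/16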
Manila service is optional. If the service is not enabled in Red Hat OpenStack Platform (RHOSP), the Manila CSI driver is not installed and the storage classes for Manila are not created.
Prerequisites
- RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform.
Procedure (UI)
To dynamically create a Manila CSI volume using the web console:
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
Define the required options on the resulting page.
- Select the appropriate storage class.
- Enter a unique name for the storage claim.
Select the access mode to specify read and write access for the PVC you are creating.
ImportantUse RWX if you want the PV that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster.
- Define the size of the storage claim.
- Click Create to create the PVC and generate a PV.
Procedure (CLI)
To dynamically create a Manila CSI volume using the command-line interface (CLI):
Create and save a file with the `PersistentVolumeClaim` object described by the following YAML:
pvc-manila.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-manila
spec:
  accessModes: 1
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-manila-gold 2
- 1 Use the ReadWriteMany (RWX) access mode if the volume must be mounted by multiple pods on multiple nodes.
- 2 The name of the storage class for the Manila share type that you want to use.
Create the object you saved in the previous step by running the following command:
$ oc create -f pvc-manila.yaml
A new PVC is created.
To verify that the volume was created and is ready, run the following command:
$ oc get pvc pvc-manila
The output shows that `pvc-manila` is `Bound`.
You can now use the new PVC to configure a pod.
6.18. Red Hat Virtualization CSI Driver Operator
6.18.1. Overview
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Red Hat Virtualization (RHV).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to RHV storage assets, OpenShift Container Platform installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the `openshift-cluster-csi-drivers` namespace.
- The oVirt CSI Driver Operator provides a default `StorageClass` object that you can use to create Persistent Volume Claims (PVCs).
- The oVirt CSI driver enables you to create and mount oVirt PVs.
6.18.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
The oVirt CSI driver does not support snapshots.
6.18.3. Red Hat Virtualization (RHV) CSI driver storage class
OpenShift Container Platform creates a default object of type `StorageClass` named `ovirt-csi-sc`, which is used for creating dynamically provisioned persistent volumes.
To create additional storage classes for different configurations, create and save a file with the `StorageClass` object described by the following sample YAML:
ovirt-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name> 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "<boolean>" 2
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> 3
reclaimPolicy: Delete 4
volumeBindingMode: Immediate 5
parameters:
  storageDomainName: <rhv-storage-domain-name> 6
  thinProvisioning: "<boolean>" 7
  csi.storage.k8s.io/fstype: <file_system_type> 8
- 1 Name of the storage class.
- 2 Set to `true` if this storage class should be the default storage class in the cluster. If set to `true`, the existing default storage class must be edited and set to `false`.
- 3 `true` enables dynamic volume expansion, `false` prevents it. `true` is recommended.
- 4 Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is `Delete`.
- 5 Indicates how to provision and bind `PersistentVolumeClaims`. When not set, `VolumeBindingImmediate` is used. This field is only applied by servers that enable the `VolumeScheduling` feature.
- 6 The RHV storage domain name to use.
- 7 If `true`, the disk is thin provisioned. If `false`, the disk is preallocated. Thin provisioning is recommended.
- 8 Optional: File system type to be created. Possible values: `ext4` (default) or `xfs`.
6.18.4. Creating a persistent volume on RHV
When you create a `PersistentVolumeClaim` object, OpenShift Container Platform provisions a new persistent volume (PV) on Red Hat Virtualization (RHV) and creates the corresponding `PersistentVolume` object.
Prerequisites
- You are logged in to a running OpenShift Container Platform cluster.
- You provided the correct RHV credentials in the `ovirt-credentials` secret.
- You have installed the oVirt CSI driver.
- You have defined at least one storage class.
Procedure
If you are using the web console to dynamically create a persistent volume on RHV:
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the required options on the resulting page.
- Select the appropriate `StorageClass` object, which is `ovirt-csi-sc` by default.
- Select the access mode. Currently, RWO (ReadWriteOnce) is the only supported access mode.
- Define the size of the storage claim.
Select the Volume Mode:
- `Filesystem`: Mounted into pods as a directory. This mode is the default.
- `Block`: Block device, without any file system on it.
- Click Create to create the `PersistentVolumeClaim` object and generate a `PersistentVolume` object.
If you are using the command-line interface (CLI) to dynamically create a RHV CSI volume:
Create and save a file with the `PersistentVolumeClaim` object described by the following sample YAML:
pvc-ovirt.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ovirt
spec:
  storageClassName: ovirt-csi-sc 1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <volume size> 2
  volumeMode: <volume mode> 3
- 1 Name of the required storage class.
- 2 Volume size in Gi.
- 3 Supported options: `Filesystem` and `Block`.
Create the object you saved in the previous step by running the following command:
$ oc create -f pvc-ovirt.yaml
To verify that the volume was created and is ready, run the following command:
$ oc get pvc pvc-ovirt
The output shows that `pvc-ovirt` is `Bound`.
If you need to update the Operator credentials, see the instructions in How to modify the RHV credentials in OCP 4.
6.19. VMware vSphere CSI Driver Operator
6.19.1. Overview
OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OpenShift Container Platform installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the `openshift-cluster-csi-drivers` namespace.
- vSphere CSI Driver Operator: The Operator provides a storage class, called `thin-csi`, that you can use to create persistent volume claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
- vSphere CSI driver: The driver enables you to create and mount vSphere PVs. In OpenShift Container Platform 4.12.21 and later, the driver version is 2.7.1. In OpenShift Container Platform 4.12 versions earlier than 4.12.21, the version is 2.6.1. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Enterprise Linux CoreOS (RHCOS) release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems.
OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
6.19.2. About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
6.19.3. vSphere CSI limitations
The following limitations apply to the vSphere Container Storage Interface (CSI) Driver Operator:
- The vSphere CSI Driver supports dynamic and static provisioning. However, when using static provisioning in the PV specifications, do not use the key `storage.kubernetes.io/csiProvisionerIdentity` in `csi.volumeAttributes`, because this key indicates dynamically provisioned PVs.
- Migrating persistent container volumes between datastores using the vSphere client interface is not supported with OpenShift Container Platform.
6.19.4. vSphere storage policy
The vSphere CSI Driver Operator storage class uses vSphere’s storage policy. OpenShift Container Platform automatically creates a storage policy that targets the datastore configured in the cloud configuration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: thin-csi
provisioner: csi.vsphere.vmware.com
parameters:
StoragePolicyName: "$openshift-storage-policy-xxxx"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
reclaimPolicy: Delete
6.19.5. ReadWriteMany vSphere volume support
If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If the vSAN file service is not configured, then ReadWriteOnce (RWO) is the only access mode available. If you do not have the vSAN file service configured and you request RWX, the volume fails to get created and an error is logged.
For more information about configuring the vSAN file service in your environment, see vSAN File Service.
You can request RWX volumes by making the following persistent volume claim (PVC):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
resources:
requests:
storage: 1Gi
accessModes:
- ReadWriteMany
storageClassName: thin-csi
Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service.
6.19.6. VMware vSphere CSI Driver Operator requirements
To install the vSphere CSI Driver Operator, the following requirements must be met:
- VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later
- vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later
- Virtual machines of hardware version 15 or later
- No third-party vSphere CSI driver already installed in the cluster
If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later.
The VMware vSphere CSI Driver Operator is supported only on clusters deployed with `platform: vsphere` in the installation manifest.
To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver.
6.19.7. Removing a third-party vSphere CSI Driver Operator
OpenShift Container Platform 4.10, and later, includes a built-in version of the vSphere Container Storage Interface (CSI) Operator Driver that is supported by Red Hat. If you have installed a vSphere CSI driver provided by the community or another vendor, updates to the next major version of OpenShift Container Platform, such as 4.13, or later, might be disabled for your cluster.
OpenShift Container Platform 4.12, and later, clusters are still fully supported, and updates to z-stream releases of 4.12, such as 4.12.z, are not blocked, but you must correct this state by removing the third-party vSphere CSI Driver before updates to next major version of OpenShift Container Platform can occur. Removing the third-party vSphere CSI driver does not require deletion of associated persistent volume (PV) objects, and no data loss should occur.
These instructions may not be complete, so consult the vendor or community provider uninstall guide to ensure removal of the driver and components.
To uninstall the third-party vSphere CSI Driver:
- Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plugin) Deployment and Daemonset objects.
- Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver.
Delete the third-party vSphere CSI driver `CSIDriver` object:
~ $ oc delete CSIDriver csi.vsphere.vmware.com
Example output
csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted
After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat’s vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OpenShift Container Platform 4.13, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat’s vSphere CSI Driver Operator.
6.19.8. Configuring vSphere CSI Topology
OpenShift Container Platform provides the ability to deploy OpenShift Container Platform for vSphere on different zones and regions, which allows you to deploy over multiple compute clusters, thus helping to avoid a single point of failure.
OpenShift Container Platform on vSphere does not support multiple Datacenters.
This is accomplished by defining zone and region categories in vCenter, and then assigning these categories to different failure domains, such as a compute cluster, by creating tags for these zone and region categories. After you have created the appropriate categories, and assigned tags to vCenter objects, you can create additional machinesets that create virtual machines (VMs) that are responsible for scheduling pods in those failure domains.
Procedure
In the VMware vCenter vSphere client GUI, define appropriate zone and region categories and tags.
While vSphere allows you to create categories with any arbitrary name, OpenShift Container Platform strongly recommends use of the `openshift-region` and `openshift-zone` names for defining topology.
Table 6.4. vSphere topology with one region and two zones
| Compute cluster | Failure domain | Description |
|---|---|---|
| Compute cluster: ocp1, Datacenter: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1a (tag) | This defines a failure domain in region us-east-1 with zone us-east-1a. |
| Compute cluster: ocp2, Datacenter: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1b (tag) | This defines a different failure domain within the same region, called us-east-1b. |
For more information about vSphere categories and tags, see the VMware vSphere documentation.
To allow the container storage interface (CSI) driver to detect this topology, edit the `clusterCSIDriver` object YAML file `driverConfig` section:
- Specify the `openshift-zone` and `openshift-region` categories that you created earlier.
- Set `driverType` to `vSphere`.
~ $ oc edit clustercsidriver csi.vsphere.vmware.com -o yaml
Example output
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  logLevel: Normal
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  unsupportedConfigOverrides: null
  driverConfig:
    driverType: vSphere 1
    vSphere:
      topologyCategories: 2
      - openshift-zone
      - openshift-region
- 1 Ensure that `driverType` is set to `vSphere`.
- 2 The `openshift-zone` and `openshift-region` categories that you created earlier in vCenter.
Verify that `CSINode` objects have topology keys by running the following commands:
~ $ oc get csinode
Example output
NAME                     DRIVERS   AGE
co8-4s88d-infra-2m5vd    1         27m
co8-4s88d-master-0       1         70m
co8-4s88d-master-1       1         70m
co8-4s88d-master-2       1         70m
co8-4s88d-worker-j2hmg   1         47m
co8-4s88d-worker-mbb46   1         47m
co8-4s88d-worker-zlk7d   1         47m
~ $ oc get csinode co8-4s88d-worker-j2hmg -o yaml
Example output
...
spec:
  drivers:
  - allocatable:
      count: 59
    name: csi-vsphere.vmware.com
    nodeID: co8-4s88d-worker-j2hmg
    topologyKeys: 1
    - topology.csi.vmware.com/openshift-zone
    - topology.csi.vmware.com/openshift-region
- 1 Topology keys from the vSphere `openshift-zone` and `openshift-region` categories.
Note: `CSINode` objects might take some time to receive updated topology information. After the driver is updated, `CSINode` objects should have topology keys in them.
Create a tag to assign to datastores across failure domains:
When an OpenShift Container Platform cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
- In vCenter, create a category for tagging the datastores, for example, `openshift-zonal-datastore-cat`. You can use any other category name, provided the category is used uniquely for tagging datastores participating in the OpenShift Container Platform cluster. Also, ensure that `StoragePod`, `Datastore`, and `Folder` are selected as Associable Entities for the created category.
- In vCenter, create a tag that uses the previously created category. This example uses the tag name `openshift-zonal-datastore`.
- Assign the previously created tag (in this example `openshift-zonal-datastore`) to each datastore in a failure domain that would be considered for dynamic provisioning.
Note: You can use any names you like for categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OpenShift Container Platform cluster.
Create a storage policy that targets the tag-based datastores in each failure domain:
- In vCenter, from the main menu, click Policies and Profiles.
- On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
- Click CREATE.
- Type a name for the storage policy.
For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the `openshift-zonal-datastore` tag).
The datastores are listed in the storage compatibility table.
Create a new storage class that uses the new zoned storage policy:
- Click Storage > StorageClasses.
- On the StorageClasses page, click Create StorageClass.
- Type a name for the new storage class in Name.
- Under Provisioner, select csi.vsphere.vmware.com.
- Under Additional parameters, for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier.
Click Create.
Example output
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zoned-sc 1
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: zoned-storage-policy 2
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- 1 Name of the new topology-aware storage class.
- 2 Specify the zoned storage policy that you created earlier.
Note: You can also create the storage class by editing the preceding YAML file and running the command `oc create -f $FILE`.
Results
Persistent volume claims (PVCs) and PVs created from the topology-aware storage class are truly zonal, and should use the datastore in their respective zone depending on how pods are scheduled:
~ $ oc get pv <pv-name> -o yaml
Example output
...
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/openshift-zone
operator: In
values:
- <openshift-zone>
- key: topology.csi.vmware.com/openshift-region
operator: In
values:
- <openshift-region>
...
persistentVolumeReclaimPolicy: Delete
storageClassName: <zoned-storage-class-name>
volumeMode: Filesystem
...
6.19.9. Additional resources
Chapter 7. Generic ephemeral volumes
7.1. Overview
Generic ephemeral volumes are a type of ephemeral volume that can be provided by all storage drivers that support persistent volumes and dynamic provisioning. Generic ephemeral volumes are similar to `emptyDir` volumes in that they provide a per-pod directory for scratch data, which is usually empty after provisioning.
Generic ephemeral volumes are specified inline in the pod spec and follow the pod’s lifecycle. They are created and deleted along with the pod.
Generic ephemeral volumes have the following features:
- Storage can be local or network-attached.
- Volumes can have a fixed size that pods are not able to exceed.
- Volumes might have some initial data, depending on the driver and parameters.
- Typical operations on volumes are supported, assuming that the driver supports them, including snapshotting, cloning, resizing, and storage capacity tracking.
Generic ephemeral volumes do not support offline snapshots and resize.
Due to this limitation, the following Container Storage Interface (CSI) drivers do not support the following features for generic ephemeral volumes:
- Azure Disk CSI driver does not support resize.
- Cinder CSI driver does not support snapshot.
7.2. Lifecycle and persistent volume claims
The parameters for a volume claim are allowed inside a volume source of a pod. Labels, annotations, and the whole set of fields for persistent volume claims (PVCs) are supported. When such a pod is created, the ephemeral volume controller then creates an actual PVC object (from the template shown in the Creating generic ephemeral volumes procedure) in the same namespace as the pod, and ensures that the PVC is deleted when the pod is deleted. This triggers volume binding and provisioning in one of two ways:
- Either immediately, if the storage class uses immediate volume binding. With immediate binding, the scheduler is forced to select a node that has access to the volume after it is available.
- When the pod is tentatively scheduled onto a node (`WaitForFirstConsumer` volume binding mode). This volume binding option is recommended for generic ephemeral volumes because then the scheduler can choose a suitable node for the pod.
In terms of resource ownership, a pod that has generic ephemeral storage is the owner of the PVCs that provide that ephemeral storage. When the pod is deleted, the Kubernetes garbage collector deletes the PVC, which then usually triggers deletion of the volume because the default reclaim policy of storage classes is to delete volumes. You can create quasi-ephemeral local storage by using a storage class with a reclaim policy of retain: the storage outlives the pod, and in this case, you must ensure that volume clean-up happens separately. While these PVCs exist, they can be used like any other PVC. In particular, they can be referenced as data source in volume cloning or snapshotting. The PVC object also holds the current status of the volume.
7.3. Security
Enabling the generic ephemeral volume feature allows users who can create pods to also create persistent volume claims (PVCs) indirectly. This feature works even if these users do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit your security model, use an admission webhook that rejects objects such as pods that have a generic ephemeral volume.
The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies.
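For example, a minimal `ResourceQuota` sketch (the name and the limit are hypothetical) that caps the number of PVCs in a namespace, which also bounds PVCs created indirectly through generic ephemeral volumes:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pvc-count-quota
spec:
  hard:
    persistentvolumeclaims: "10"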
7.4. Persistent volume claim naming
Automatically created persistent volume claims (PVCs) are named by a combination of the pod name and the volume name, with a hyphen (-) in the middle. This naming convention also introduces a potential conflict between different pods, and between pods and manually created PVCs.
For example, if the pod `pod-a` uses a volume named `scratch` and another pod named `pod` uses a volume named `a-scratch`, both end up with the same PVC name, `pod-a-scratch`.
Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict. Without the right PVC, a pod cannot start.
Be careful when naming pods and volumes inside the same namespace so that naming conflicts do not occur.
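As a quick sanity check of the ownership relationship, you can inspect the owner reference of an automatically created PVC. This sketch assumes the pod named `my-app` with the volume `data` from the example in the next section, which yields a PVC named `my-app-data`; the command should print `Pod/my-app`:
$ oc get pvc my-app-data -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'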
7.5. Creating generic ephemeral volumes
Procedure
- Create the `Pod` object definition and save it to a file.
- Include the generic ephemeral volume information in the file.
my-example-pod-with-generic-vols.yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-app
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
        - mountPath: "/mnt/storage"
          name: data
      command: [ "sleep", "1000000" ]
  volumes:
    - name: data 1
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-app-ephvol
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "gp2-csi"
            resources:
              requests:
                storage: 1Gi
- 1 Generic ephemeral volume claim.
Chapter 8. Expanding persistent volumes
8.1. Enabling volume expansion support
Before you can expand persistent volumes, the `StorageClass` object must have the `allowVolumeExpansion` field set to `true`.
Procedure
Edit the `StorageClass` object and add the `allowVolumeExpansion` attribute by running the following command:
$ oc edit storageclass <storage_class_name> 1
- 1 Specifies the name of the storage class.
The following example demonstrates adding this line at the bottom of the storage class configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
...
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true 1
- 1 Setting this attribute to `true` allows PVCs to be expanded after creation.
8.2. Expanding CSI volumes
You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created.
CSI volume expansion does not support the following:
- Recovering from failure when expanding volumes
- Shrinking
Prerequisites
- The underlying CSI driver supports resize.
- Dynamic provisioning is used.
- The controlling `StorageClass` object has `allowVolumeExpansion` set to `true`. For more information, see "Enabling volume expansion support."
Procedure
- For the persistent volume claim (PVC), set `.spec.resources.requests.storage` to the desired new size.
- Watch the `status.conditions` field of the PVC to see if the resize has completed. OpenShift Container Platform adds the `Resizing` condition to the PVC during expansion, which is removed after expansion completes.
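For example, a minimal sketch of the first step using `oc patch`, assuming a hypothetical PVC named `mypvc` that you want to grow to 8 GiB:
$ oc patch pvc mypvc -p '{"spec": {"resources": {"requests": {"storage": "8Gi"}}}}'
You can then watch the `Resizing` condition with `oc describe pvc mypvc`.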
8.3. Expanding FlexVolume with a supported driver
When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform.
FlexVolume allows expansion if the driver has `RequiresFSResize` set to `true`.
Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod.
Prerequisites
- The underlying volume driver supports resize.
- The driver is set with the `RequiresFSResize` capability set to `true`.
-
The controlling object has
StorageClassset toallowVolumeExpansion.true
Procedure
To use resizing in the FlexVolume plugin, you must implement the `ExpandableVolumePlugin` interface using these methods:
- `RequiresFSResize`: If `true`, updates the capacity directly. If `false`, calls the `ExpandFS` method to finish the filesystem resize.
- `ExpandFS`: If `true`, calls `ExpandFS` to resize the filesystem after the physical volume expansion is done. The volume driver can also perform the physical volume resize together with the filesystem resize.
Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume.
8.4. Expanding local volumes
You can manually expand persistent volumes (PVs) and persistent volume claims (PVCs) created by using the local storage operator (LSO).
Procedure
- Expand the underlying devices. Ensure that appropriate capacity is available on these devices.
- Update the corresponding PV objects to match the new device sizes by editing the `.spec.capacity` field of the PV.
- For the storage class that is used for binding the PVC to the PV, set `allowVolumeExpansion: true`.
- For the PVC, set `.spec.resources.requests.storage` to match the new size.
Kubelet should automatically expand the underlying file system on the volume, if necessary, and update the status field of the PVC to reflect the new size.
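A brief sketch of the PV and PVC edits using `oc patch`, assuming hypothetical object names `local-pv-1` and `local-claim` and a new 100 GiB device size:
$ oc patch pv local-pv-1 -p '{"spec": {"capacity": {"storage": "100Gi"}}}'
$ oc patch pvc local-claim -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'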
8.5. Expanding persistent volume claims (PVCs) with a file system
Expanding PVCs based on volume types that need file system resizing, such as GCE, EBS, and Cinder, is a two-step process. First, expand the volume objects in the cloud provider. Second, expand the file system on the node.
Expanding the file system on the node only happens when a new pod is started with the volume.
Prerequisites
- The controlling `StorageClass` object must have `allowVolumeExpansion` set to `true`.
Procedure
Edit the PVC and request a new size by editing `spec.resources.requests`. For example, the following expands the `ebs` PVC to 8 Gi:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs
spec:
  storageClassName: "storageClassWithFlagSet"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi 1
- 1 Updating `spec.resources.requests` to a larger amount expands the PVC.
After the cloud provider object has finished resizing, the PVC is set to `FileSystemResizePending`. Check the condition by entering the following command:
$ oc describe pvc <pvc_name>
- When the cloud provider object has finished resizing, the `PersistentVolume` object reflects the newly requested size in `PersistentVolume.Spec.Capacity`. At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the `FileSystemResizePending` condition is removed from the PVC.
8.6. Recovering from failure when expanding volumes
If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller.
Procedure
- Mark the persistent volume (PV) that is bound to the PVC with the `Retain` reclaim policy. This can be done by editing the PV and changing `persistentVolumeReclaimPolicy` to `Retain`.
- Delete the PVC.
- Manually edit the PV and delete the `claimRef` entry from the PV specs to ensure that the newly created PVC can bind to the PV marked `Retain`. This marks the PV as `Available`.
- Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider.
- Set the `volumeName` field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only.
- Restore the reclaim policy on the PV.
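As an illustration of the reclaim-policy and `claimRef` edits, assuming a hypothetical PV named `pv-001`:
$ oc patch pv pv-001 -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
Then, after deleting the PVC:
$ oc patch pv pv-001 --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'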
Chapter 9. Dynamic provisioning
9.1. About dynamic provisioning
The `StorageClass` resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. `StorageClass` objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster administrators (`cluster-admin`) or storage administrators (`storage-admin`) define and create the `StorageClass` objects that users can request without needing any detailed knowledge about the underlying storage volume sources.
The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs.
9.2. Available dynamic provisioning plugins
OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
| Storage type | Provisioner plugin name | Notes |
|---|---|---|
| Red Hat OpenStack Platform (RHOSP) Cinder | `kubernetes.io/cinder` | |
| RHOSP Manila Container Storage Interface (CSI) | | Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. |
| AWS Elastic Block Store (EBS) | `kubernetes.io/aws-ebs` | For dynamic provisioning when using multiple clusters in different zones, tag each node with `Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id>` where `<cluster_name>` and `<cluster_id>` are unique per cluster. |
| Azure Disk | `kubernetes.io/azure-disk` | |
| Azure File | `kubernetes.io/azure-file` | The `persistent-volume-binder` service account requires permissions to create and get secrets to store the Azure storage account and keys. |
| GCE Persistent Disk (gcePD) | `kubernetes.io/gce-pd` | In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. |
| VMware vSphere | `kubernetes.io/vsphere-volume` | |
Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
9.3. Defining a storage class
`StorageClass` objects are currently a globally scoped object and must be created by `cluster-admin` or `storage-admin` users.
The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.
The following sections describe the basic definition for a `StorageClass` object and specific examples for each of the supported plugin types.
9.3.1. Basic StorageClass object definition
The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition.
Sample StorageClass definition
kind: StorageClass 1
apiVersion: storage.k8s.io/v1 2
metadata:
  name: <storage-class-name> 3
  annotations: 4
    storageclass.kubernetes.io/is-default-class: 'true'
    ...
provisioner: kubernetes.io/aws-ebs 5
parameters: 6
  type: gp3
...
- 1
- (required) The API object type.
- 2
- (required) The current apiVersion.
- 3
- (required) The name of the storage class.
- 4
- (optional) Annotations for the storage class.
- 5
- (required) The type of provisioner associated with this storage class.
- 6
- (optional) The parameters required for the specific provisioner; these will change from plugin to plugin.
9.3.2. Storage class annotations
To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata:
storageclass.kubernetes.io/is-default-class: "true"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
...
This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. Your cluster can have more than one storage class, but only one of them can be the default storage class.
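For example, a PVC sketch that omits `storageClassName` entirely (the claim name and size are hypothetical) is provisioned by whichever storage class currently carries the default annotation:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi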
The beta annotation `storageclass.beta.kubernetes.io/is-default-class` still works; however, it will be removed in a future release.
To set a storage class description, add the following annotation to your storage class metadata:
kubernetes.io/description: My Storage Class Description
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubernetes.io/description: My Storage Class Description
...
9.3.3. RHOSP Cinder object definition
cinder-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/cinder
parameters:
  type: fast 2
  availability: nova 3
  fsType: ext4 4
- 1
- Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
- 2
- Volume type created in Cinder. Default is empty.
- 3
- Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node.
- 4
- File system that is created on dynamically provisioned volumes. This value is copied to the `fsType` field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is `ext4`.
9.3.4. RHOSP Manila Container Storage Interface (CSI) object definition
Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning.
9.3.5. AWS Elastic Block Store (EBS) object definition
aws-ebs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 2
  iopsPerGB: "10" 3
  encrypted: "true" 4
  kmsKeyId: keyvalue 5
  fsType: ext4 6
- 1
- (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
- 2
- (required) Select from `io1`, `gp3`, `sc1`, `st1`. The default is `gp3`.
- Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details.
- 4
- Optional: Denotes whether to encrypt the EBS volume. Valid values are
trueorfalse. - 5
- Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but
encyptedis set totrue, then AWS generates a key. See the AWS documentation for a valid ARN value. - 6
- Optional: File system that is created on dynamically provisioned volumes. This value is copied to the
fsTypefield of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value isext4.
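As a worked example of the iopsPerGB calculation: with iopsPerGB: "10", a claim for 100 GiB is provisioned with 100 × 10 = 1,000 IOPS, while a claim for 3,000 GiB computes to 30,000 IOPS and is therefore capped at the 20,000 IOPS maximum. The claim sizes here are illustrative.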
9.3.6. Azure Disk object definition
azure-advanced-disk-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
provisioner: kubernetes.io/azure-disk
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
reclaimPolicy: Delete
1. Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2. Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone.
3. Possible values are Shared (default), Managed, and Dedicated.

   Important: Red Hat only supports the use of kind: Managed in the storage class.

   With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.
4. Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks.
   - If kind is set to Shared, Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster.
   - If kind is set to Managed, Azure creates new managed disks.
   - If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work:
     - The specified storage account must be in the same region.
     - Azure Cloud Provider must have write access to the storage account.
   - If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster.
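Note that with volumeBindingMode: WaitForFirstConsumer, a claim against this class remains in the Pending state until a pod that uses it is scheduled; this is expected and not an error. A quick check, where the claim name is a placeholder:

$ oc get pvc <claim-name>

The STATUS column shows Pending until the first consuming pod lands on a node, at which point the volume is provisioned in that node's zone.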
9.3.7. Azure File object definition
The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure
Define a ClusterRole object that allows access to create and view secrets:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # name: system:azure-cloud-provider
  name: <persistent-volume-binder-role>
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get','create']

1. The name of the cluster role to view and create secrets.
Add the cluster role to the service account:
$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder

Create the Azure File StorageClass object:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <azure-file>
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus
  skuName: Standard_LRS
  storageAccount: <storage-account>
reclaimPolicy: Delete
volumeBindingMode: Immediate
1. Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2. Location of the Azure storage account, such as eastus. Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster’s location.
3. SKU tier of the Azure storage account, such as Standard_LRS. Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU.
4. Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location.
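As a minimal sketch of a claim that consumes this class (the claim name and requested size are assumptions; <azure-file> matches the class name above), note that Azure File supports shared access, so ReadWriteMany is a common choice:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-claim       # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany            # Azure File supports shared (RWX) access
  storageClassName: <azure-file>
  resources:
    requests:
      storage: 5Gi             # placeholder size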
9.3.7.1. Considerations when using Azure File
The following file system features are not supported by the default Azure File storage class:
- Symlinks
- Hard links
- Extended attributes
- Sparse files
- Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory.
The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
mountOptions:
  - uid=1500
  - gid=1500
  - mfsymlinks
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus
  skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
9.3.8. GCE PersistentDisk (gcePD) object definition
gce-pd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
9.3.9. VMware vSphere object definition
vsphere-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name>
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
1. Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2. For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation.
3. diskformat: thin, zeroedthick, and eagerzeroedthick are all valid disk formats. See the vSphere docs for additional details regarding the disk format types. The default value is thin.
9.4. Changing the default storage class
Use this procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard, and you want to change the default storage class from gp3 to standard.
Procedure
List the storage classes:
$ oc get storageclass

Example output

NAME            TYPE
gp3 (default)   kubernetes.io/aws-ebs
standard        kubernetes.io/aws-ebs

(default) denotes the default storage class.

Change the value of the storageclass.kubernetes.io/is-default-class annotation to false for the default storage class:

$ oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

Make another storage class the default by setting the storageclass.kubernetes.io/is-default-class annotation to true:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Verify the changes:

$ oc get storageclass

Example output

NAME                 TYPE
gp3                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/aws-ebs
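To confirm that exactly one class now carries the default annotation, one option (this jsonpath filter is a common Kubernetes pattern, not an OpenShift-specific command) is to print the names of all classes annotated as default:

$ oc get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'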
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.