Chapter 4. Configuring persistent storage
4.1. Persistent storage using AWS Elastic Block Store
OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision AWS EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS.
OpenShift Container Platform defaults to using an in-tree, or non-Container Storage Interface (CSI), plugin to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
High-availability of storage in the infrastructure is left to the underlying storage provider.
OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
4.1.1. Creating the EBS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
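For reference, a storage class for dynamically provisioned EBS volumes might look like the following sketch. The class name gp3-sc and the gp3 volume type are assumptions, and the example uses the AWS EBS CSI provisioner rather than the in-tree plugin:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-sc                     # hypothetical class name
provisioner: ebs.csi.aws.com       # AWS EBS CSI provisioner; the in-tree provisioner is kubernetes.io/aws-ebs
parameters:
  type: gp3                        # EBS volume type; gp2 and io1 are other common values
  fsType: ext4                     # file system created on provisioned volumes
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer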
4.1.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
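If you prefer the CLI to the console, an equivalent claim can be created from a manifest such as the following sketch; the claim name, the 10Gi size, and the gp3-sc class from the earlier sketch are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                # an EBS volume attaches to a single node at a time
  storageClassName: gp3-sc         # hypothetical class from the earlier sketch
  resources:
    requests:
      storage: 10Gi                # requested claim size

Creating this object with oc create -f <file>.yaml results in the same bound claim as the console procedure.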
4.1.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType
parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.1.4. Maximum number of EBS volumes on a node
By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits. The volume limit depends on the instance type.
As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type.
For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator.
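To see the in-tree limit that the scheduler applies to a particular node, you can inspect the node's allocatable resources. This is a sketch only; <node-name> is a placeholder, and the attachable-volumes-aws-ebs resource appears only for nodes that use the in-tree plugin:

$ oc get node <node-name> -o jsonpath='{.status.allocatable.attachable-volumes-aws-ebs}'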
4.1.5. Encrypting container persistent volumes on AWS with a KMS key
Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS.
Prerequisites
- Underlying infrastructure must contain storage.
- You must create a customer KMS key on AWS.
Procedure
Create a storage class:
$ cat << EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name> 1
parameters:
  fsType: ext4 2
  encrypted: "true"
  kmsKeyId: keyvalue 3
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
- 1
- Specifies the name of the storage class.
- 2
- File system that is created on provisioned volumes.
- 3
- Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to "true", then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation.
Create a persistent volume claim (PVC) with the storage class specifying the KMS key:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 1Gi
EOF
Create workload containers to consume the PVC:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: httpd
      image: quay.io/centos7/httpd-24-centos7
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /mnt/storage
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc
EOF
4.1.6. Additional resources
- See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
4.2. Persistent storage using Azure
OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional resources
4.2.1. Creating the Azure storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
Procedure
- In the OpenShift Container Platform console, click Storage → Storage Classes.
- In the storage class overview, click Create Storage Class.
- Define the desired options on the page that appears.
- Enter a name to reference the storage class.
- Enter an optional description.
- Select the reclaim policy.
- Select kubernetes.io/azure-disk from the drop-down list.
- Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS, Standard_LRS, StandardSSD_LRS, and UltraSSD_LRS.
- Enter the kind of account. Valid options are shared, dedicated, and managed.

  Important: Red Hat only supports the use of kind: Managed in the storage class.

  With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.
- Enter additional parameters for the storage class as desired.
- Click Create to create the storage class.
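As a reference for these console settings, an equivalent storage class created from a manifest might look like the following sketch; the class name managed-premium is an assumption:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium            # hypothetical class name
provisioner: kubernetes.io/azure-disk   # matches the drop-down selection above
parameters:
  storageaccounttype: Premium_LRS  # Azure storage account SKU tier
  kind: Managed                    # only Managed is supported by Red Hat
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer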
Additional resources
4.2.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.2.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType
parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.2.4. Machine sets that deploy machines with ultra disks using PVCs
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.
Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
Additional resources
4.2.4.1. Creating machines with ultra disks by using machine sets
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

$ oc edit machineset <machine-set-name>

where <machine-set-name> is the machine set that you want to provision machines with ultra disks.

Add the following lines in the positions indicated:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd 1
      providerSpec:
        value:
          ultraSSDCapability: Enabled 2
Create a machine set using the updated configuration by running the following command:
$ oc create -f <machine-set-name>.yaml
Create a storage class that contains the following YAML definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc 1
parameters:
  cachingMode: None
  diskIopsReadWrite: "2000" 2
  diskMbpsReadWrite: "320" 3
  kind: managed
  skuname: UltraSSD_LRS
provisioner: disk.csi.azure.com 4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer 5
- 1
- Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
- 2
- Specify the number of IOPS for the storage class.
- 3
- Specify the throughput in MBps for the storage class.
- 4
- For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
- 5
- Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk 1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ultra-disk-sc 2
  resources:
    requests:
      storage: 4Gi 3
Create a pod that contains the following YAML definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ultra
spec:
  nodeSelector:
    disk: ultrassd 1
  containers:
    - name: nginx-ultra
      image: alpine:latest
      command:
        - "sleep"
        - "infinity"
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ultra-disk 2
Verification
Validate that the machines are created by running the following command:
$ oc get machines
The machines should be in the Running state.

For a machine that is running and has a node attached, validate the partition by running the following command:

$ oc debug node/<node-name> -- chroot /host lsblk

In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
Next steps
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
    - name: ssd-benchmark1
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
  volumes:
    - name: lun0p1
      hostPath:
        path: /var/lib/lun0p1
        type: DirectoryOrCreate
  nodeSelector:
    disktype: ultrassd
4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks
Use the information in this section to understand and recover from issues you might encounter.
4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk
If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating
state and an alert is triggered.
For example, if the additionalCapabilities.ultraSSDEnabled
parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, describe the pod by running the following command:
$ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>
4.3. Persistent storage using Azure File
OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically.
Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications.
High availability of storage in the infrastructure is left to the underlying storage provider.
Azure File volumes use the Server Message Block (SMB) protocol.
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
Additional resources
4.3.1. Create the Azure File share persistent volume claim
To create the persistent volume claim, you must first define a Secret
object that contains the Azure account and key. This secret is used in the PersistentVolume
definition, and will be referenced by the persistent volume claim for use in applications.
Prerequisites
- An Azure File share exists.
- The credentials to access this share, specifically the storage account and key, are available.
Procedure
Create a Secret object that contains the Azure File credentials:

$ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1
    --from-literal=azurestorageaccountkey=<storage-account-key> 2
Create a PersistentVolume object that references the Secret object you created:

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv0001" 1
spec:
  capacity:
    storage: "5Gi" 2
  accessModes:
    - "ReadWriteOnce"
  storageClassName: azure-file-sc
  azureFile:
    secretName: <secret-name> 3
    shareName: share-1 4
    readOnly: false
Create a PersistentVolumeClaim object that maps to the persistent volume you created:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claim1" 1
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi" 2
  storageClassName: azure-file-sc 3
  volumeName: "pv0001" 4
- 1
- The name of the persistent volume claim.
- 2
- The size of this persistent volume claim.
- 3
- The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition.
- 4
- The name of the existing PersistentVolume object that references the Azure File share.
4.3.2. Mount the Azure File share in a pod
After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying Azure File share.
Procedure
Create a pod that mounts the existing persistent volume claim:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name 1
spec:
  containers:
    ...
    volumeMounts:
      - mountPath: "/data" 2
        name: azure-file-share
  volumes:
    - name: azure-file-share
      persistentVolumeClaim:
        claimName: claim1 3
- 1
- The name of the pod.
- 2
- The path to mount the Azure File share inside the pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- 3
- The name of the PersistentVolumeClaim object that has been previously created.
4.4. Persistent storage using Cinder
OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.
Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
Additional resources
- For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder.
4.4.1. Manual provisioning with Cinder
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Prerequisites
- OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP)
- Cinder volume ID
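If you do not have the volume ID at hand, you can look it up with the OpenStack client. This is a sketch that assumes the python-openstackclient is installed and your RHOSP credentials are sourced; <volume-name> is a placeholder:

$ openstack volume list
$ openstack volume show <volume-name> -f value -c id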
4.4.1.1. Creating the persistent volume
You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform:
Procedure
Save your object definition to a file.
cinder-persistentvolume.yaml
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5
- 1
- The name of the volume that is used by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- Indicates
cinder
for Red Hat OpenStack Platform (RHOSP) Cinder volumes. - 4
- The file system that is created when the volume is mounted for the first time.
- 5
- The Cinder volume to use.
ImportantDo not change the
fstype
parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.Create the object definition file you saved in the previous step.
$ oc create -f cinder-persistentvolume.yaml
4.4.1.2. Persistent volume formatting
You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType
parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
4.4.1.3. Cinder volume security
If you use Cinder PVs in your application, configure security for their deployment configurations.
Prerequisites
- An SCC must be created that uses the appropriate fsGroup strategy.
Procedure
Create a service account and add it to the SCC:
$ oc create serviceaccount <service_account>
$ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
In your application's deployment configuration, provide the service account name and securityContext:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1 1
  selector: 2
    name: frontend
  template: 3
    metadata:
      labels: 4
        name: frontend 5
    spec:
      containers:
        - image: openshift/hello-openshift
          name: helloworld
          ports:
            - containerPort: 8080
              protocol: TCP
      restartPolicy: Always
      serviceAccountName: <service_account> 6
      securityContext:
        fsGroup: 7777 7
- 1
- The number of copies of the pod to run.
- 2
- The label selector of the pod to run.
- 3
- A template for the pod that the controller creates.
- 4
- The labels on the pod. They must include labels from the label selector.
- 5
- The maximum name length after expanding any parameters is 63 characters.
- 6
- Specifies the service account you created.
- 7
- Specifies an fsGroup for the pods.
4.5. Persistent storage using Fibre Channel
OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre Channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed.
Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional resources
4.5.1. Provisioning
To provision Fibre Channel volumes using the PersistentVolume API, the following must be available:

- The targetWWNs (array of Fibre Channel target's World Wide Names).
- A valid LUN number.
- The filesystem type.
A persistent volume and a LUN have a one-to-one mapping between them.
Prerequisites
- Fibre Channel LUNs must exist in the underlying infrastructure.
PersistentVolume object definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  fc:
    wwids: [scsi-3600508b400105e210000900000490000] 1
    targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2
    lun: 2 3
    fsType: ext4
- 1
- World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems.
- 2 3
- Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Changing the value of the fstype
parameter after the volume has been formatted and provisioned can result in data loss and pod failure.
4.5.1.1. Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
4.5.1.2. Fibre Channel volume security
Users request storage with a persistent volume claim. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.
4.6. Persistent storage using FlexVolume
FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
The out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to a CSI driver.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.
To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the flexvolume
in-tree plugin.
Additional resources
4.6.1. About FlexVolume drivers
A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume
object with flexVolume
as the source.
Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume.
4.6.2. FlexVolume driver example
The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data.
The FlexVolume driver contains:
-
All
flexVolume.options
. -
Some options from
flexVolume
prefixed bykubernetes.io/
, such asfsType
andreadwrite
. -
The content of the referenced secret, if specified, prefixed by
kubernetes.io/secret/
.
FlexVolume driver JSON input example
{ "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", }
OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation.
FlexVolume driver default output example
{ "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" }
Exit code of the driver should be 0
for success and 1
for error.
Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation.
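Putting these conventions together, a driver can be a small executable script. The following is a minimal shell sketch only, with placeholder mount logic and hypothetical behavior; it is not a complete driver, and a real implementation must handle the operations and JSON options described in this section:

#!/bin/bash
# Minimal FlexVolume driver sketch. It follows the operation-name, JSON-output,
# and exit-code conventions described above; the actual mount logic is a placeholder.

op=$1

case "$op" in
    init)
        # Report success and declare that attach/detach is not implemented.
        echo '{"status": "Success", "capabilities": {"attach": false}}'
        exit 0
        ;;
    mount)
        mount_dir=$2
        json_options=$3
        # A real driver would parse $json_options and mount the backing storage at $mount_dir here.
        echo '{"status": "Success"}'
        exit 0
        ;;
    unmount)
        mount_dir=$2
        # A real driver would clean up the mount at $mount_dir here.
        echo '{"status": "Success"}'
        exit 0
        ;;
    *)
        # All other operations are reported as not supported.
        echo '{"status": "Not supported"}'
        exit 1
        ;;
esac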
4.6.3. Installing FlexVolume drivers
FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required.
Prerequisites
FlexVolume drivers must implement these operations:

init
Initializes the driver. It is called during initialization of all nodes.
- Arguments: none
- Executed on: node
- Expected output: default JSON

mount
Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON

unmount
Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON

mountdevice
Mounts a volume's device to a directory where individual pods can then bind mount.
This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON

unmountdevice
Unmounts a volume's device from a directory.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON

All other operations should return JSON with {"status": "Not supported"} and exit code 1.
Procedure
To install the FlexVolume driver:
- Ensure that the executable file exists on all nodes in the cluster.
- Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>.

For example, to install the FlexVolume driver for the storage foo, place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo.
4.6.4. Consuming storage using FlexVolume drivers
Each PersistentVolume
object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume.
Procedure
- Use the PersistentVolume object to reference the installed storage.
Persistent volume object definition using FlexVolume drivers example
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: openshift.com/foo 3
    fsType: "ext4" 4
    secretRef: foo-secret 5
    readOnly: true 6
    options: 7
      fooServer: 192.168.0.1:1234
      fooVolumeName: bar
- 1
- The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage.
- 2
- The amount of storage allocated to this volume.
- 3
- The name of the driver. This field is mandatory.
- 4
- The file system that is present on the volume. This field is optional.
- 5
- The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional.
- 6
- The read-only flag. This field is optional.
- 7
- The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable:

"fsType":"<FS type>", "readwrite":"<rw>", "secret/key1":"<secret1>" ... "secret/keyN":"<secretN>"
Secrets are passed only to mount or unmount call-outs.
4.7. Persistent storage using GCE Persistent Disk
OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
GCE Persistent Disk volumes can be provisioned dynamically.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.
For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional resources
4.7.1. Creating the GCE storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
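For reference, a storage class for dynamically provisioned GCE Persistent Disk volumes might look like the following sketch; the class name gce-pd-ssd and the pd-ssd disk type are assumptions, and the in-tree provisioner is shown:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-ssd                 # hypothetical class name
provisioner: kubernetes.io/gce-pd  # in-tree GCE PD provisioner
parameters:
  type: pd-ssd                     # pd-standard is the other common disk type
  replication-type: none
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer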
4.7.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.7.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType
parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.8. Persistent storage using iSCSI
You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
High-availability of storage in the infrastructure is left to the underlying storage provider.
When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860
and 3260
.
Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils
package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi
. The iscsi-initiator-utils
package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS).
For more information, see Managing Storage Devices.
4.8.1. Provisioning
Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume
API.
PersistentVolume object definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.16.154.81:3260
    iqn: iqn.2014-12.example.server:storage.target00
    lun: 0
    fsType: 'ext4'
4.8.2. Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi
) and be matched with a corresponding volume of equal or greater capacity.
4.8.3. iSCSI volume security
Users request storage with a PersistentVolumeClaim
object. This claim only lives in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail.
Each iSCSI LUN must be accessible by all nodes in the cluster.
4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration
Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    fsType: ext4
    chapAuthDiscovery: true 1
    chapAuthSession: true 2
    secretRef:
      name: chap-secret 3
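The secretRef in this example points to a Secret that holds the CHAP credentials. A sketch of creating it follows; the key names are the ones the Kubernetes iSCSI plugin expects for discovery and session CHAP, while the secret name and placeholder values are assumptions:

$ oc create secret generic chap-secret \
    --from-literal=discovery.sendtargets.auth.username=<discovery_user> \
    --from-literal=discovery.sendtargets.auth.password=<discovery_password> \
    --from-literal=node.session.auth.username=<session_user> \
    --from-literal=node.session.auth.password=<session_password>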
4.8.4. iSCSI multipathing
For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail.
To specify multi-paths in the pod specification, use the portals field. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
readOnly: false
- 1
- Add additional target portals using the portals field.
4.8.5. iSCSI custom initiator IQN
Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs.
To specify a custom initiator IQN, use the initiatorName field.
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
initiatorName: iqn.2016-04.test.com:custom.iqn 1
fsType: ext4
readOnly: false
- 1
- Specify the name of the initiator.
4.9. Persistent storage using NFS
OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod
definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
Additional resources
4.9.1. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required.
Procedure
Create an object definition for the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 5Gi 2
  accessModes:
    - ReadWriteOnce 3
  nfs: 4
    path: /tmp 5
    server: 172.17.0.2 6
  persistentVolumeReclaimPolicy: Retain 7
- 1
- The name of the volume. This is the PV identity in various oc <command> pod commands.
- 2
- The amount of storage allocated to this volume.
- 3
- Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
- 4
- The volume type being used, in this case the nfs plugin.
- 5
- The path that is exported by the NFS server.
- 6
- The hostname or IP address of the NFS server.
- 7
- The reclaim policy for the PV. This defines what happens to a volume when released.
Note: Each NFS volume must be mountable by all schedulable nodes in the cluster.
Verify that the PV was created:
$ oc get pv
Example output
NAME     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   <none>   5Gi        RWO           Available                    31s
Create a persistent volume claim that binds to the new PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce 1
  resources:
    requests:
      storage: 5Gi 2
  volumeName: pv0001
  storageClassName: ""
Verify that the persistent volume claim was created:
$ oc get pvc
Example output
NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-claim1   Bound    pv0001   5Gi        RWO                           2m
4.9.2. Enforcing disk quotas
You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
4.9.3. NFS volume security
This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes
section of their Pod
definition.
The /etc/exports
file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.
As an example, if the target NFS directory appears on the NFS server as:
$ ls -lZ /opt/nfs -d
Example output
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
$ id nfsnobody
Example output
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match SELinux labels, and either run with a UID of 65534
, the nfsnobody
owner, or with 5555
in its supplemental groups to access the directory.
The owner ID of 65534
is used as an example. Even though NFS’s root_squash
maps root
, uid 0
, to nfsnobody
, uid 65534
, NFS exports can have arbitrary owner IDs. Owner 65534
is not required for NFS exports.
4.9.3.1. Group IDs
The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup
SCC strategy and the fsGroup
value in the securityContext
of the pod.
To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.
Because the group ID on the example target NFS directory is 5555
, the pod can define that group ID using supplementalGroups
under the securityContext
definition of the pod. For example:
spec:
  containers:
    - name:
      ...
  securityContext: 1
    supplementalGroups: [5555] 2
Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted
SCC. This SCC has the supplementalGroups
strategy set to RunAsAny
, meaning that any supplied group ID is accepted without range checking.
As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555
is allowed.
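For reference, such a custom SCC might look like the following sketch; the SCC name and the group ID range are assumptions chosen so that the range includes the example group 5555:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nfs-supplemental-scc       # hypothetical SCC name
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs                  # enforce group ID range checking
  ranges:
    - min: 5000
      max: 6000                    # assumed range that includes the example group ID 5555
volumes:
  - '*'
users: []
groups: []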
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default
service account in the given project unless another has been specified on the Pod
specification.
4.9.3.2. User IDs
User IDs can be defined in the container image or in the Pod
definition.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534
, ignoring group IDs for the moment, so the following can be added to the Pod
definition:
spec:
  containers: 1
    - name:
      ...
      securityContext:
        runAsUser: 65534 2
Assuming that the project is default
and the SCC is restricted
, the user ID of 65534
as requested by the pod is not allowed. Therefore, the pod fails for the following reasons:
-
It requests
65534
as its user ID. -
All SCCs available to the pod are examined to see which SCC allows a user ID of
65534
. While all policies of the SCCs are checked, the focus here is on user ID. -
Because all available SCCs use
MustRunAsRange
for theirrunAsUser
strategy, UID range checking is required. -
65534
is not included in the SCC or project’s user ID range.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534
is allowed.
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default
service account in the given project unless another has been specified on the Pod
specification.
4.9.3.3. SELinux
Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default.
For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.
Prerequisites
- The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.

Procedure

Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.

# setsebool -P virt_use_nfs 1
4.9.3.4. Export settings
To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Every export must be exported using the following format:
/<example_fs> *(rw,root_squash)
The firewall must be configured to allow traffic to the mount point.
For NFSv4, configure the default port 2049 (nfs).

NFSv4
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

NFSv3
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
- The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups, as shown in the group IDs above.
4.9.4. Reclaiming resources
NFS implements the OpenShift Container Platform Recyclable
plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.
By default, PVs are set to Retain
.
Once the claim on a PV is deleted and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1
:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
The user creates PVC1
, which binds to nfs1
. The user then deletes PVC1
, releasing claim to nfs1
. This results in nfs1
being Released
. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released
to Available
causes errors and potential data loss.
4.9.5. Additional configuration and troubleshooting
Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:
- NFSv4 mounts incorrectly showing all files with the same owner, which typically points to the NFSv4 ID mapping settings on the NFS server and clients.
- Disabling ID mapping on NFSv4.
4.10. Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation provides provider-agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.
4.11. Persistent storage using VMware vSphere volumes
OpenShift Container Platform allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image.
OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.
In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform.
Additional resources
4.11.1. Dynamically provisioning VMware vSphere volumes
Dynamically provisioning VMware vSphere volumes is the recommended method.
4.11.2. Prerequisites
- An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support.
You can use either of the following procedures to dynamically provision these volumes using the default storage class.
4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI
OpenShift Container Platform installs a default storage class, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the required options on the resulting page.
- Select the thin storage class.
- Enter a unique name for the storage claim.
- Select the access mode to determine the read and write access for the created storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI
OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure (CLI)
You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the following contents:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc 1
spec:
  accessModes:
    - ReadWriteOnce 2
  resources:
    requests:
      storage: 1Gi 3
Enter the following command to create the PersistentVolumeClaim object from the file:

$ oc create -f pvc.yaml
4.11.3. Statically provisioning VMware vSphere volumes
To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods:
Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:

$ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk

Create using vmware-vdiskmanager:

$ vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
Create a persistent volume that references the VMDKs. Create a file, pv1.yaml, with the PersistentVolume object definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume: 3
    volumePath: "[datastore1] volumes/myDisk" 4
    fsType: ext4 5
- 1
- The name of the volume. This name is how it is identified by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
- 4
- The existing VMDK volume to use. If you used vmkfstools, you must enclose the datastore name in square brackets, [], in the volume definition, as shown previously.
- 5
- The file system type to mount. For example, ext4, xfs, or other file systems.
Important: Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Create the PersistentVolume object from the file:
$ oc create -f pv1.yaml
Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml, with the PersistentVolumeClaim object definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1 1
spec:
  accessModes:
  - ReadWriteOnce 2
  resources:
    requests:
      storage: "1Gi" 3
  volumeName: pv1 4
Create the PersistentVolumeClaim object from the file:
$ oc create -f pvc1.yaml
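To confirm that the claim bound to the statically provisioned volume, you can check both objects from the CLI. This is a quick sketch that assumes the pv1 and pvc1 names from the preceding examples:
$ oc get pv pv1
$ oc get pvc pvc1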
4.11.3.1. Formatting VMware vSphere volumes
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
4.12. Persistent storage using local storage
4.12.1. Persistent storage using local volumes
OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.
Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.
Local volumes can only be used as a statically created persistent volume.
4.12.1.1. Installing the Local Storage Operator
The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.
Prerequisites
- Access to the OpenShift Container Platform web console or command-line interface (CLI).
Procedure
Create the openshift-local-storage project:
$ oc adm new-project openshift-local-storage
Optional: Allow local storage creation on infrastructure nodes.
You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.
You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.
To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:
$ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.
Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.
To allow the Local Storage Operator to run on the management CPU pool, run the following command:
$ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'
From the UI
To install the Local Storage Operator from the web console, follow these steps:
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → OperatorHub.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
- Adjust the values for Update Channel and Approval Strategy to the values that you want.
- Click Install.
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.
From the CLI
Install the Local Storage Operator from the CLI.
Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:
Example openshift-local-storage.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable
  installPlanApproval: Automatic 1
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
- 1
- The user approval policy for an install plan.
Create the Local Storage Operator object by entering the following command:
$ oc apply -f openshift-local-storage.yaml
At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Verify local storage installation by checking that all pods and the Local Storage Operator have been created:
Check that all the required pods have been created:
$ oc -n openshift-local-storage get pods
Example output
NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m
Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:
$ oc get csvs -n openshift-local-storage
Example output
NAME                                         DISPLAY         VERSION               REPLACES   PHASE
local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded
After all checks have passed, the Local Storage Operator is installed successfully.
4.12.1.2. Provisioning local volumes by using the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Prerequisites
- The Local Storage Operator is installed.
You have a local disk that meets the following conditions:
- It is attached to a node.
- It is not mounted.
- It does not contain partitions.
Procedure
Create the local volume resource. This resource must define the nodes and paths to the local volumes.
Note: Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).
Example: Filesystem
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage" 1
spec:
  nodeSelector: 2
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - ip-10-0-140-183
        - ip-10-0-158-139
        - ip-10-0-164-33
  storageClassDevices:
  - storageClassName: "local-sc" 3
    volumeMode: Filesystem 4
    fsType: xfs 5
    devicePaths: 6
    - /path/to/device 7
- 1
- The namespace where the Local Storage Operator is installed.
- 2
- Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from
oc get node
. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. - 3
- The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
- 4
- The volume mode, either Filesystem or Block, that defines the type of local volumes.
Note: A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.
- 5
- The file system that is created when the local volume is mounted for the first time.
- 6
- The path containing a list of local storage devices to choose from.
- 7
- Replace this value with your actual local disk filepath to the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Note: If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
Example: Block
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage" 1
spec:
  nodeSelector: 2
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - ip-10-0-136-143
        - ip-10-0-140-255
        - ip-10-0-144-180
  storageClassDevices:
  - storageClassName: "local-sc" 3
    volumeMode: Block 4
    devicePaths: 5
    - /path/to/device 6
- 1
- The namespace where the Local Storage Operator is installed.
- 2
- Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from
oc get node
. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. - 3
- The name of the storage class to use when creating persistent volume objects.
- 4
- The volume mode, either Filesystem or Block, that defines the type of local volumes.
- 5
- The path containing a list of local storage devices to choose from.
- 6
- Replace this value with your actual local disk filepath to the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Note: If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:
$ oc create -f <local-volume>.yaml
Verify that the provisioner was created and that the corresponding daemon sets were created:
$ oc get all -n openshift-local-storage
Example output
NAME                                          READY   STATUS    RESTARTS   AGE
pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-storage-operator   1/1     1            1           14m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m
Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.
Verify that the persistent volumes were created:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.
4.12.1.3. Provisioning local volumes without the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.
Prerequisites
- Local disks are attached to the OpenShift Container Platform nodes.
Procedure
Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.
Note: Do not use different storage class names for the same device. Doing so will create multiple PVs.
example-pv-filesystem.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-filesystem
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem 1
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc 2
  local:
    path: /dev/xvdf 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
- 1
- The volume mode, either
Filesystem
orBlock
, that defines the type of PVs. - 2
- The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
- 3
- The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with
Filesystem
volumeMode
.
Note: A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.
example-pv-block.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-block
spec:
  capacity:
    storage: 100Gi
  volumeMode: Block 1
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc 2
  local:
    path: /dev/xvdf 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:
$ oc create -f <example-pv>.yaml
Verify that the local PV was created:
$ oc get pv
Example output
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE
example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc                3m47s
example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc                12h
example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc                12h
example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc                12h
4.12.1.4. Creating the local volume persistent volume claim
Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.
Prerequisites
- Persistent volumes have been created using the local volume provisioner.
Procedure
Create the PVC using the corresponding storage class:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc-name 1
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem 2
  resources:
    requests:
      storage: 100Gi 3
  storageClassName: local-sc 4
Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pvc>.yaml
4.12.1.5. Attach the local claim
After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource.
Prerequisites
- A persistent volume claim exists in the same namespace.
Procedure
Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:
apiVersion: v1
kind: Pod
spec:
# ...
  containers:
    volumeMounts:
    - name: local-disks 1
      mountPath: /data 2
  volumes:
  - name: local-disks
    persistentVolumeClaim:
      claimName: local-pvc-name 3
# ...
- 1
- The name of the volume to mount.
- 2
- The path inside the pod where the volume is mounted. Do not mount to the container root,
/
, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host/dev/pts
files. It is safe to mount the host by using/host
. - 3
- The name of the existing persistent volume claim to use.
Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pod>.yaml
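To verify that the claim is mounted where you expect, you can inspect the running pod. The following commands are a sketch; replace <pod-name> with the name from your pod definition, and /data matches the mount path used in the example above:
$ oc get pod <pod-name> -o wide
$ oc exec <pod-name> -- df -h /data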
4.12.1.6. Automating discovery and provisioning for local storage devices
The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.
Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.
Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.
Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.
Prerequisites
- You have cluster administrator permissions.
- You have installed the Local Storage Operator.
- You have attached local disks to OpenShift Container Platform nodes.
-
You have access to the OpenShift Container Platform web console and the
oc
command-line interface (CLI).
Procedure
To enable automatic discovery of local devices from the web console:
- Click Operators → Installed Operators.
- In the openshift-local-storage namespace, click Local Storage.
- Click the Local Volume Discovery tab.
- Click Create Local Volume Discovery and then select either Form view or YAML view.
- Configure the LocalVolumeDiscovery object parameters. Click Create.
The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.
To display a continuous list of available devices on a node:
- Log in to the OpenShift Container Platform web console.
- Navigate to Compute → Nodes.
- Click the node name that you want to open. The "Node Details" page is displayed.
Select the Disks tab to display the list of the selected devices.
The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.
To automatically provision local volumes for the discovered devices from the web console:
- Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.
- Select Local Volume Set → Create Local Volume Set.
- Enter a volume set name and a storage class name.
Choose All nodes or Select nodes to apply filters accordingly.
Note: Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.
Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.
A message displays after several minutes, indicating that the "Operator reconciled successfully."
Alternatively, to provision local volumes for the discovered devices from the CLI:
Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: example-autodetect
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-0
        - worker-1
  storageClassName: local-sc 1
  volumeMode: Filesystem
  fsType: ext4
  maxDeviceCount: 10
  deviceInclusionSpec:
    deviceTypes: 2
    - disk
    - part
    deviceMechanicalProperties:
    - NonRotational
    minSize: 10G
    maxSize: 100G
    models:
    - SAMSUNG
    - Crucial_CT525MX3
    vendors:
    - ATA
    - ST2000LM
- 1
- Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
- 2
- When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
Create the local volume set object:
$ oc apply -f local-volume-set.yaml
Verify that the local persistent volumes were dynamically provisioned based on the storage class:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Results are deleted after they are removed from the node. Symlinks must be manually removed.
4.12.1.7. Using tolerations with Local Storage Operator pods
Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.
You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.
Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. An operator allows you to leave one of these parameters empty.
Prerequisites
- The Local Storage Operator is installed.
- Local disks are attached to OpenShift Container Platform nodes with a taint.
- Tainted nodes are expected to provision local storage.
Procedure
To configure local volumes for scheduling on tainted nodes:
Modify the YAML file that defines the LocalVolume resource and add the tolerations, as shown in the following example:
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage"
spec:
  tolerations:
  - key: localstorage 1
    operator: Equal 2
    value: "localstorage" 3
  storageClassDevices:
  - storageClassName: "local-sc"
    volumeMode: Block 4
    devicePaths: 5
    - /dev/xvdg
- 1
- Specify the key that you added to the node.
- 2
- Specify the Equal operator to require the key/value parameters to match. If the operator is Exists, the system checks that the key exists and ignores the value. If the operator is Equal, then the key and value must match.
- 3
- Specify the value of the taint that you added to the node.
- 4
- The volume mode, either Filesystem or Block, that defines the type of the local volumes.
- 5
- The path containing a list of local storage devices to choose from.
Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
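For reference, a taint that matches the toleration in the first example could be applied to a node as follows. This is a sketch: the node name is a placeholder and NoSchedule is only one possible effect:
$ oc adm taint nodes <node-name> localstorage=localstorage:NoSchedule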
4.12.1.8. Local Storage Operator Metrics
OpenShift Container Platform provides the following metrics for the Local Storage Operator:
- lso_discovery_disk_count: total number of discovered devices on each node
- lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
- lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
- lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
- lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
- lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume
To use these metrics, be sure to:
- Enable support for monitoring when installing the Local Storage Operator.
- When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace.
For more information about metrics, see Managing metrics.
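As a sketch of the manual labeling step described above, the label can be applied with the CLI; this assumes the default openshift-local-storage namespace:
$ oc label namespace openshift-local-storage operator-metering=true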
4.12.1.9. Deleting the Local Storage Operator resources
4.12.1.9.1. Removing a local volume or local volume set
Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.
The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.
Prerequisites
The persistent volume must be in a Released or Available state.
Warning: Deleting a persistent volume that is still in use can result in data loss or corruption.
Procedure
Edit the previously created local volume to remove any unwanted disks.
Edit the cluster resource:
$ oc edit localvolume <name> -n openshift-local-storage
-
Navigate to the lines under
devicePaths
, and delete any representing unwanted disks.
Delete any persistent volumes created.
$ oc delete pv <pv-name>
Delete the directory and included symlinks on the node.
Warning: The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.
$ oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1
- 1
- The name of the storage class used to create the local volumes.
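For example, with the hypothetical node and storage class names used earlier in this chapter, the command might look like the following:
$ oc debug node/ip-10-0-140-183 -- chroot /host rm -rf /mnt/local-storage/local-sc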
4.12.1.9.2. Uninstalling the Local Storage Operator
To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.
Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery:
$ oc delete localvolume --all --all-namespaces
$ oc delete localvolumeset --all --all-namespaces
$ oc delete localvolumediscovery --all --all-namespaces
Uninstall the Local Storage Operator from the web console.
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click the Options menu at the end of the Local Storage Operator.
- Click Uninstall Operator.
- Click Remove in the window that appears.
The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:
$ oc delete pv <pv-name>
Delete the openshift-local-storage project:
$ oc delete project openshift-local-storage
4.12.2. Persistent storage using hostPath
A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.
4.12.2.1. Overview
OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.
A hostPath volume must be provisioned statically.
Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-mount
spec:
  containers:
  - image: registry.access.redhat.com/ubi8/ubi
    name: test-container
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /host
      name: host-slash
  volumes:
  - name: host-slash
    hostPath:
      path: /
      type: ''
4.12.2.2. Statically provisioning hostPath volumes
A pod that uses a hostPath volume must be referenced by manual (static) provisioning.
Procedure
Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume 1
  labels:
    type: local
spec:
  storageClassName: manual 2
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce 3
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data" 4
- 1
- The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
- 2
- Used to bind persistent volume claim (PVC) requests to the PV.
- 3
- The volume can be mounted as
read-write
by a single node. - 4
- The configuration file specifies that the volume is at
/mnt/data
on the cluster’s node. To avoid corrupting your host system, do not mount to the container root,/
, or any path that is the same in the host and the container. You can safely mount the host by using/host
Create the PV from the file:
$ oc create -f pv.yaml
Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual
Create the PVC from the file:
$ oc create -f pvc.yaml
4.12.2.3. Mounting the hostPath share in a privileged pod
After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying hostPath share.
Procedure
Create a privileged pod that mounts the existing persistent volume claim:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name 1
spec:
  containers:
    ...
    securityContext:
      privileged: true 2
    volumeMounts:
    - mountPath: /data 3
      name: hostpath-privileged
  ...
  securityContext: {}
  volumes:
  - name: hostpath-privileged
    persistentVolumeClaim:
      claimName: task-pvc-volume 4
- 1
- The name of the pod.
- 2
- The pod must run as privileged to access the node’s storage.
- 3
- The path to mount the host path share inside the privileged pod. Do not mount to the container root,
/
, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host/dev/pts
files. It is safe to mount the host by using/host
. - 4
- The name of the
PersistentVolumeClaim
object that has been previously created.
4.12.3. Persistent storage using Logical Volume Manager Storage
Logical Volume Manager (LVM) Storage uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.
LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on a limited resources single-node OpenShift cluster.
You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.
4.12.3.1. Logical Volume Manager Storage installation
You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.
You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI (oc), the OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).
4.12.3.1.1. Prerequisites to install LVM Storage
The prerequisites to install LVM Storage are as follows:
- Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
- Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.
Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.
NoteYou cannot wipe the disks that are in use.
- If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the Installing LVM Storage using RHACM section.
4.12.3.1.2. Installing LVM Storage using OpenShift Container Platform Web Console
You can install LVM Storage using the Red Hat OpenShift Container Platform OperatorHub.
Prerequisites
- You have access to the single-node OpenShift cluster.
-
You are using an account with the
cluster-admin
and Operator installation permissions.
Procedure
- Log in to the OpenShift Container Platform Web Console.
- Click Operators → OperatorHub.
- Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.12.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the Operator installation.
- Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Click Install.
Verification steps
- Verify that LVM Storage shows a green tick, indicating successful installation.
4.12.3.1.3. Uninstalling LVM Storage installed using the OpenShift Web Console
You can uninstall LVM Storage using the Red Hat OpenShift Container Platform Web Console.
Prerequisites
- You deleted all the applications on the clusters that are using the storage provisioned by LVM Storage.
- You deleted the persistent volume claims (PVCs) and persistent volumes (PVs) provisioned using LVM Storage.
- You deleted all volume snapshots provisioned by LVM Storage.
-
You verified that no logical volume resources exist by using the
oc get logicalvolume
command. -
You have access to the single-node OpenShift cluster using an account with
cluster-admin
permissions.
Procedure
- From the Operators → Installed Operators page, scroll to LVM Storage or type LVM Storage into the Filter by name box to find it, and then click it.
- Click on the LVMCluster tab.
- On the right-hand side of the LVMCluster page, select Delete LVMCluster from the Actions drop-down menu.
- Click on the Details tab.
- On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.
- Select Remove. LVM Storage stops running and is completely removed.
4.12.3.1.4. Installing LVM Storage using RHACM
LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy object on RHACM that deploys and configures the Operator when it is applied to managed clusters that match the selector specified in the PlacementRule resource. The policy is also applied to clusters that are imported later and satisfy the placement rule.
Prerequisites
-
Access to the RHACM cluster using an account with
cluster-admin
and Operator installation permissions. - Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.
- The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Create a namespace in which you will create policies.
# oc create ns lvms-policy-ns
To create a policy, save the following YAML to a file with a name such as
policy-lvms-operator.yaml
:apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector: 2 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 nodeSelector: 3 nodeSelectorTerms: - matchExpressions: - key: app operator: In values: - test1 remediationAction: enforce severity: low
- 1
- Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
- 2
- To control or restrict the volume group to your preferred disks, you can manually specify the local paths of the disks in the deviceSelector section of the LVMCluster YAML.
- 3
- To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up.
Important: This nodeSelector node filter matching is not the same as the pod label matching.
Create the policy in the namespace by running the following command:
# oc create -f policy-lvms-operator.yaml -n lvms-policy-ns 1
- 1
- The
policy-lvms-operator.yaml
is the name of the file to which the policy is saved.
This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters that match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.
Important: After a device is added to the LVMCluster, it cannot be removed.
4.12.3.1.5. Uninstalling LVM Storage installed using RHACM
To uninstall LVM Storage that you installed using RHACM, you need to delete the RHACM policy that you created for deploying and configuring the Operator.
When you delete the RHACM policy, the resources that the policy has created are not removed. You need to create additional policies to remove the resources.
As the created resources are not removed when you delete the policy, you need to perform the following steps:
- Remove all the Persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage.
-
Remove the
LVMCluster
resources to clean up Logical Volume Manager resources created on the disks. - Create an additional policy to uninstall the Operator.
Prerequisites
Ensure that the following are deleted before deleting the policy:
- All the applications on the managed clusters that are using the storage provisioned by LVM Storage.
- PVCs and persistent volumes (PVs) provisioned using LVM Storage.
- All volume snapshots provisioned by LVM Storage.
-
Ensure you have access to the RHACM cluster using an account with a
cluster-admin
role.
Procedure
In the OpenShift CLI (oc), delete the RHACM policy that you created for deploying and configuring LVM Storage on the hub cluster by using the following command:
# oc delete -f policy-lvms-operator.yaml -n lvms-policy-ns 1
- 1
- The
policy-lvms-operator.yaml
is the name of the file to which the policy was saved.
To create a policy for removing the
LVMCluster
resource, save the following YAML to a file with a name such aslvms-remove-policy.yaml
. This enables the Operator to clean up all Logical Volume Manager resources that it created on the cluster.apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue
- Set the value of the PlacementRule.spec.clusterSelector field to select the clusters from which to uninstall LVM Storage.
Create the policy by running the following command:
# oc create -f lvms-remove-policy.yaml -n lvms-policy-ns
To create a policy to check if the
LVMCluster
CR has been removed, save the following YAML to a file with a name such ascheck-lvms-remove-policy.yaml
:apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue
Create the policy by running the following command:
# oc create -f check-lvms-remove-policy.yaml -n lvms-policy-ns
Check the policy status by running the following command:
# oc get policy -n lvms-policy-ns
Example output
NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
policy-lvmcluster-delete   enforce              Compliant          15m
policy-lvmcluster-inform   inform               Compliant          15m
After both the policies are compliant, save the following YAML to a file with a name such as
lvms-uninstall-policy.yaml
to create a policy to uninstall LVM Storage.apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high
Create the policy by running the following command:
# oc create -f lvms-uninstall-policy.yaml -n lvms-policy-ns
Additional resources
4.12.3.2. Limitations to configure the size of the devices used in LVM Storage
The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:
- The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
- You can define the size of PE and LE during the physical and logical device creation.
- The default PE and LE size is 4 MB.
- If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.
Architecture | RHEL 6 | RHEL 7 | RHEL 8 | RHEL 9 |
---|---|---|---|---|
32-bit | 16 TB | - | - | - |
64-bit | 8 EB [1] 100 TB [2] | 8 EB [1] 500 TB [2] | 8 EB | 8 EB |
- Theoretical size.
- Tested size.
4.12.3.3. Creating Logical Volume Manager cluster
You can create a Logical Volume Manager cluster after you install LVM Storage.
OpenShift Container Platform supports additional worker nodes for single-node OpenShift clusters on bare-metal user-provisioned infrastructure. LVM Storage detects and uses the additional worker nodes when the new nodes show up. In case you need to set a node filter for the additional worker nodes, you can use the YAML view while creating the cluster.
You can only create a single instance of the LVMCluster
custom resource (CR) on an OpenShift Container Platform cluster.
This node filter matching is not the same as the pod label matching.
Prerequisites
- You installed LVM Storage from the OperatorHub.
Procedure
In the OpenShift Container Platform Web Console, click Operators → Installed Operators to view all the installed Operators. Ensure that the Project selected is openshift-storage.
- Click on LVM Storage, and then click Create LVMCluster under LVMCluster.
- In the Create LVMCluster page, select either Form view or YAML view.
- Enter a name for the cluster.
- Click Create.
Optional: To add a node filter, click YAML view and specify the filter in the nodeSelector section:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - test1
Optional: To edit the local device path of the disks, click YAML view and specify the device path in the deviceSelector section:
spec:
  storage:
    deviceClasses:
    - name: vg1
      deviceSelector:
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
Verification Steps
- Click Storage → Storage Classes from the left pane of the OpenShift Container Platform Web Console.
- Verify that the lvms-<device-class-name> storage class is created with the LVMCluster creation. By default, vg1 is the device-class-name.
Additional resources
4.12.3.4. Provisioning storage using LVM Storage
You can provision persistent volume claims (PVCs) using the storage class that is created during the Operator installation. You can provision block and file PVCs, however, the storage is allocated only when a pod that uses the PVC is created.
LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB.
Procedure
Identify the StorageClass that is created when LVM Storage is deployed.
The StorageClass name is in the format lvms-<device-class-name>. The device-class-name is the name of the device class that you provided in the LVMCluster of the Policy YAML. For example, if the deviceClass is called vg1, then the storageClass name is lvms-vg1.
The volumeBindingMode of the storage class is set to WaitForFirstConsumer.
To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.
Example YAML to create a PVC
# block pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
---
# file pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
Create the PVC by running the following command:
# oc create -f pvc.yaml -n <application_namespace>
The created PVCs remain in the Pending state until you deploy the pods that use them.
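Because the storage class uses WaitForFirstConsumer volume binding, the claim binds and storage is allocated only when a pod consumes it. The following pod definition is a minimal sketch that uses the lvm-file-1 claim from the example above; the pod name, container image, and mount path are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: lvm-file-consumer                         # hypothetical pod name
  namespace: default
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi    # example image
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: lvm-storage
      mountPath: /data                            # hypothetical mount path
  volumes:
  - name: lvm-storage
    persistentVolumeClaim:
      claimName: lvm-file-1                       # matches the file PVC in pvc.yaml above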
4.12.3.5. Scaling storage of single-node OpenShift clusters
The OpenShift Container Platform supports additional worker nodes for single-node OpenShift clusters on bare-metal user-provisioned infrastructure. LVM Storage detects and uses the new additional worker nodes when the nodes show up.
Additional resources
4.12.3.5.1. Scaling up storage by adding capacity to your single-node OpenShift cluster
To scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster, you can increase the capacity by adding disks.
Prerequisites
- You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.
Procedure
- Log in to OpenShift Container Platform console of the single-node OpenShift cluster.
-
From the Operators
Installed Operators page, click on the LVM Storage Operator in the openshift-storage
namespace. -
Click on the LVMCluster tab to list the
LVMCluster
CR created on the cluster. - Select Edit LVMCluster from the Actions drop-down menu.
- Click on the YAML tab.
Edit the LVMCluster CR YAML to add the new device path in the deviceSelector section:
Note: In case the deviceSelector field is not included during the LVMCluster creation, it is not possible to add the deviceSelector section to the CR. You need to remove the LVMCluster and then create a new CR.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      deviceSelector:
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 2
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
Additional resources
4.12.3.5.2. Scaling up storage by adding capacity to your single-node OpenShift cluster using RHACM
You can scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster using RHACM.
Prerequisites
- You have access to the RHACM cluster using an account with cluster-admin privileges.
- You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
- Find the disk that you want to add. The disk to be added needs to match with the device name and path of the existing disks.
To add capacity to the single-node OpenShift cluster, edit the deviceSelector section of the existing policy YAML, for example, policy-lvms-operator.yaml.

Note: If the deviceSelector field was not included during the LVMCluster creation, it is not possible to add the deviceSelector section to the CR. You must remove the LVMCluster and then recreate it from the new CR.

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-lvms
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: mykey
      operator: In
      values:
      - myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-lvms
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-lvms
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
  name: install-lvms
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-lvms
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              labels:
                openshift.io/cluster-monitoring: "true"
                pod-security.kubernetes.io/enforce: privileged
                pod-security.kubernetes.io/audit: privileged
                pod-security.kubernetes.io/warn: privileged
              name: openshift-storage
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1
            kind: OperatorGroup
            metadata:
              name: openshift-storage-operatorgroup
              namespace: openshift-storage
            spec:
              targetNamespaces:
              - openshift-storage
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: lvms
              namespace: openshift-storage
            spec:
              installPlanApproval: Automatic
              name: lvms-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        remediationAction: enforce
        severity: low
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: lvms
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: lvm.topolvm.io/v1alpha1
            kind: LVMCluster
            metadata:
              name: my-lvmcluster
              namespace: openshift-storage
            spec:
              storage:
                deviceClasses:
                - name: vg1
                  deviceSelector:
                    paths:
                    - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                    - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
                    - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # new disk is added
                  thinPoolConfig:
                    name: thin-pool-1
                    sizePercent: 90
                    overprovisionRatio: 10
                  nodeSelector:
                    nodeSelectorTerms:
                    - matchExpressions:
                      - key: app
                        operator: In
                        values:
                        - test1
        remediationAction: enforce
        severity: low
Edit the policy by running the following command:
# oc edit -f policy-lvms-operator.yaml -n lvms-policy-ns 1

- 1
- The policy-lvms-operator.yaml is the name of the existing policy.
This uses the new disk specified in the LVMCluster CR to provision storage.
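Optionally, you can check the compliance status of the updated policy on the hub cluster. This is a generic RHACM check, assuming the lvms-policy-ns namespace used in the previous command:

$ oc get policies.policy.open-cluster-management.io -n lvms-policy-ns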
4.12.3.5.3. Expanding PVCs
To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.
Prerequisites
- Dynamic provisioning is used.
- The controlling StorageClass object has allowVolumeExpansion set to true.
Procedure
Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:

$ oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
- Watch the status.conditions field of the PVC to see if the resize has completed. OpenShift Container Platform adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes. A concrete example of both steps is shown after this procedure.
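The following commands are an illustrative walk-through of both steps, assuming the lvm-file-1 PVC from the earlier example in the default namespace and a hypothetical new size of 20Gi; the jsonpath query is standard oc behavior rather than anything specific to LVM Storage:

$ oc patch pvc lvm-file-1 -n default -p '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}'

$ oc get pvc lvm-file-1 -n default -o jsonpath='{.status.conditions}'

While the expansion is in progress, the output of the second command includes a condition with the type Resizing; when the condition is no longer listed, the expansion is complete.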
4.12.3.6. Upgrading LVM Storage on single-node OpenShift clusters
Currently, it is not possible to upgrade from OpenShift Data Foundation Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters.
The data will not be preserved during this process.
Procedure
- Back up any data that you want to preserve on the persistent volume claims (PVCs).
- Delete all PVCs provisioned by the OpenShift Data Foundation Logical Volume Manager Operator and their pods.
- Reinstall LVM Storage on OpenShift Container Platform 4.12.
- Recreate the workloads.
- Copy the backup data to the PVCs created after upgrading to 4.12.
4.12.3.7. Volume snapshots for single-node OpenShift
You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:
- Back up your application data.
  Important: Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.
- Revert to a state at which the volume snapshot was taken.
Additional resources
4.12.3.7.1. Creating volume snapshots in single-node OpenShift
You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.
Prerequisites
- You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
- You stopped all the I/O to the PVC before taking the snapshot.
Procedure
- Log in to the single-node OpenShift for which you need to run the oc command.

Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.

Example YAML to create a volume snapshot
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap
spec:
  volumeSnapshotClassName: lvms-vg1
  source:
    persistentVolumeClaimName: lvm-block-1
Create the snapshot by running the following command in the same namespace as the PVC:
# oc create -f lvms-vol-snapshot.yaml
A read-only copy of the PVC is created as a volume snapshot.
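Optionally, confirm that the snapshot was created and is ready to use; READYTOUSE is one of the standard columns printed for VolumeSnapshot resources:

$ oc get volumesnapshot lvm-block-1-snap -n <namespace>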
4.12.3.7.2. Restoring volume snapshots in single-node OpenShift
When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.
Prerequisites
- The storage class must be the same as that of the source PVC.
- The size of the requested PVC must be the same as that of the source volume of the snapshot.
  Important: A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.
Procedure
- Identify the storage class name of the source PVC and volume snapshot name.
Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.

Example YAML to restore a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-block-1-restore
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
Create the PVC by running the following command in the same namespace as the snapshot:
# oc create -f lvms-vol-restore.yaml
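Optionally, verify that the restored PVC exists. Because the storage class uses WaitForFirstConsumer, the restored PVC can stay in the Pending state until a pod consumes it:

$ oc get pvc lvm-block-1-restore -n <namespace>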
4.12.3.7.3. Deleting volume snapshots in single-node OpenShift
You can delete volume snapshot resources and persistent volume claims (PVCs).
Procedure
Delete the volume snapshot resource by running the following command:
# oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
Note: When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.
To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:
# oc delete pvc <pvc_name> -n <namespace>
4.12.3.8. Volume cloning for single-node OpenShift
A clone is a duplicate of an existing storage volume that can be used like any standard volume.
4.12.3.8.1. Creating volume clones in single-node OpenShift
You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.
The cloned PVC has write access.
Prerequisites
- You ensured that the PVC is in Bound state. This is required for a consistent snapshot.
- You ensured that the StorageClass is the same as that of the source PVC.
Procedure
- Identify the storage class of the source PVC.
To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:

Example YAML to clone a volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1-clone
spec:
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1
    kind: PersistentVolumeClaim
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
Create the cloned PVC in the same namespace as the source PVC by running the following command:
# oc create -f lvms-vol-clone.yaml
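Optionally, verify that the cloned PVC exists; like other LVM Storage PVCs, it can remain Pending until a pod consumes it:

$ oc get pvc lvm-block-1-clone -n <namespace>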
4.12.3.8.2. Deleting cloned volumes in single-node OpenShift
You can delete cloned volumes.
Procedure
To delete the cloned volume, delete the cloned PVC by running the following command:
# oc delete pvc <clone_pvc_name> -n <namespace>
4.12.3.9. Monitoring LVM Storage
To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:
openshift.io/cluster-monitoring=true
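For example, assuming LVM Storage is installed in the openshift-storage namespace, you can apply the label with the standard oc label command:

$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true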
For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.
4.12.3.9.1. Metrics
You can monitor LVM Storage by viewing the metrics.
The following table describes the topolvm metrics:
Metric | Description
--- | ---
 | Indicates the percentage of data space used in the LVM thin pool.
 | Indicates the percentage of metadata space used in the LVM thin pool.
 | Indicates the size of the LVM thin pool in bytes.
 | Indicates the available space in the LVM volume group in bytes.
 | Indicates the size of the LVM volume group in bytes.
 | Indicates the available over-provisioned size of the LVM thin pool in bytes.
Metrics are updated every 10 minutes, or when there is a change in the thin pool, such as the creation of a new logical volume.
4.12.3.9.2. Alerts
When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.
LVM Storage sends the following alerts when the usage of the thin pool and volume group crosses certain thresholds:

Alert | Description
--- | ---
 | This alert is triggered when the usage of both the volume group and the thin pool exceeds 75% on nodes. Data deletion or volume group expansion is required.
 | This alert is triggered when the usage of both the volume group and the thin pool exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.
 | This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
 | This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
 | This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
 | This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
4.12.3.10. Downloading log files and diagnostic information using must-gather
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.
Procedure
Run the must-gather command from the client connected to the LVM Storage cluster:

$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.12 --dest-dir=<directory_name>
Additional resources
4.12.3.11. LVM Storage reference YAML file
The sample LVMCluster custom resource (CR) describes all the fields in the YAML file.
Example LVMCluster CR
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  tolerations:
  - effect: NoSchedule
    key: xyz
    operator: Equal
    value: "true"
  storage:
    deviceClasses: 1
    - name: vg1 2
      nodeSelector: 3
        nodeSelectorTerms: 4
        - matchExpressions:
          - key: mykey
            operator: In
            values:
            - ssd
      deviceSelector: 5
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
      thinPoolConfig: 6
        name: thin-pool-1 7
        sizePercent: 90 8
        overprovisionRatio: 10 9
status:
  deviceClassStatuses: 10
  - name: vg1
    nodeStatus: 11
    - devices: 12
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme2n1
      node: my-node.example.com 13
      status: Ready 14
  ready: true 15
  state: Ready 16
- 1
- The LVM volume groups to be created on the cluster. You can create only a single deviceClass.
- 2
- The name of the LVM volume group to be created on the nodes.
- 3
- The nodes on which to create the LVM volume group. If the field is empty, all nodes are considered.
- 4
- A list of node selector requirements.
- 5
- A list of device paths that are used to create the LVM volume group. If this field is empty, all unused disks on the node are used. After a device is added to the LVM volume group, it cannot be removed.
- 6
- The LVM thin pool configuration.
- 7
- The name of the thin pool to be created in the LVM volume group.
- 8
- The percentage of remaining space in the LVM volume group that should be used for creating the thin pool.
- 9
- The factor by which additional storage can be provisioned compared to the available storage in the thin pool.
- 10
- The status of the deviceClass.
- 11
- The status of the LVM volume group on each node.
- 12
- The list of devices used to create the LVM volume group.
- 13
- The node on which the deviceClass was created.
- 14
- The status of the LVM volume group on the node.
- 15
- This field is deprecated.
- 16
- The status of the LVMCluster.
4.12.4. Troubleshooting local persistent storage using LVMS
Because OpenShift Container Platform does not scope a persistent volume (PV) to a single project, it can be shared across the cluster and claimed by any project using a persistent volume claim (PVC). This can lead to a number of issues that require troubleshooting.
4.12.4.1. Investigating a PVC stuck in the Pending state
A persistent volume claim (PVC) can get stuck in a Pending state for a number of reasons. For example:
- Insufficient computing resources
- Network problems
- Mismatched storage class or node selector
- No available volumes
- The node with the persistent volume (PV) is in a Not Ready state
Identify the cause by using the oc describe command to review details about the stuck PVC.
Procedure
Retrieve the list of PVCs by running the following command:
$ oc get pvc
Example output
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvms-test   Pending                                      lvms-vg1       11s
Inspect the events associated with a PVC stuck in the Pending state by running the following command:

$ oc describe pvc <pvc_name> 1
- 1
- Replace <pvc_name> with the name of the PVC. For example, lvms-test.
Example output
Type     Reason              Age               From                         Message
----     ------              ----              ----                         -------
Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
4.12.4.2. Recovering from missing LVMS or Operator components
If you encounter a storage class "not found" error, check the LVMCluster resource and ensure that all the logical volume manager storage (LVMS) pods are running. You can create an LVMCluster resource if it does not exist.
Procedure
Verify the presence of the LVMCluster resource by running the following command:
$ oc get lvmcluster -n openshift-storage
Example output
NAME            AGE
my-lvmcluster   65m
If the cluster doesn’t have an LVMCluster resource, create one by running the following command:

$ oc create -n openshift-storage -f <custom_resource> 1
- 1
- Replace <custom_resource> with a custom resource URL or file tailored to your requirements.
Example custom resource
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
Check that all the pods from LVMS are in the Running state in the openshift-storage namespace by running the following command:

$ oc get pods -n openshift-storage
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
lvms-operator-7b9fb858cb-6nsml        3/3     Running   0          70m
topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0          66m
topolvm-node-dr26h                    4/4     Running   0          66m
vg-manager-r6zdv                      1/1     Running   0          66m
The expected output is one running instance of lvms-operator and topolvm-controller, and one instance of topolvm-node and vg-manager for each node.

If topolvm-node is stuck in the Init state, there is a failure to locate an available disk for LVMS to use. To retrieve the information necessary to troubleshoot, review the logs of the vg-manager pod by running the following command:

$ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
4.12.4.3. Recovering from node failure
Sometimes a persistent volume claim (PVC) is stuck in a Pending state because a particular node in the cluster has failed. To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which may require further investigation and troubleshooting.
Procedure
Examine the restart count of the topolvm-node pod instances by running the following command:

$ oc get pods -n openshift-storage
Example output
NAME                                  READY   STATUS    RESTARTS      AGE
lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
topolvm-node-dr26h                    4/4     Running   0             66m
topolvm-node-54as8                    4/4     Running   0             66m
topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
vg-manager-r6zdv                      1/1     Running   0             66m
vg-manager-990ut                      1/1     Running   0             66m
vg-manager-an118                      1/1     Running   0             66m
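To see which node hosts the pod with the increased restart count, you can add the standard -o wide option, which adds a NODE column to the output:

$ oc get pods -n openshift-storage -o wide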
After you resolve any issues with the node, you might need to perform the forced cleanup procedure if the PVC is still stuck in a Pending state.
Additional resources
4.12.4.4. Recovering from disk failure
If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there might be a problem with the underlying volume or disk. Disk and volume provisioning issues often result in a generic error first, such as Failed to provision volume with StorageClass <storage_class_name>. A second, more specific error message usually follows.
Procedure
Inspect the events associated with a PVC by running the following command:
$ oc describe pvc <pvc_name> 1
- 1
- Replace <pvc_name> with the name of the PVC.

Here are some examples of disk or volume failure error messages and their causes:

- Failed to check volume existence: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.
- Failed to bind volume: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.
- FailedMount or FailedUnMount: This error indicates problems when trying to mount the volume to a node or unmount a volume from a node. If the disk has failed, this error might appear when a pod tries to use the PVC.
- Volume is already exclusively attached to one node and can’t be attached to another: This error can appear with storage solutions that do not support ReadWriteMany access modes.
- Establish a direct connection to the host where the problem is occurring.
- Resolve the disk issue.
After you have resolved the issue with the disk, you might need to perform the forced cleanup procedure if failure messages persist or reoccur.
Additional resources
4.12.4.5. Performing a forced cleanup
If disk- or node-related problems persist after you complete the troubleshooting procedures, it might be necessary to perform a forced cleanup procedure. A forced cleanup is used to comprehensively address persistent issues and ensure the proper functioning of LVMS.
Prerequisites
- All of the persistent volume claims (PVCs) created using the logical volume manager storage (LVMS) driver have been removed.
- The pods using those PVCs have been stopped.
Procedure
Switch to the openshift-storage namespace by running the following command:

$ oc project openshift-storage
Ensure there is no LogicalVolume custom resource (CR) remaining by running the following command:

$ oc get logicalvolume
Example output
No resources found
If there are any LogicalVolume CRs remaining, remove their finalizers by running the following command:

$ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
- 1
- Replace <name> with the name of the CR.
After removing their finalizers, delete the CRs by running the following command:
$ oc delete logicalvolume <name> 1
- 1
- Replace <name> with the name of the CR.
Make sure there are no LVMVolumeGroup CRs left by running the following command:

$ oc get lvmvolumegroup
Example output
No resources found
If there are any LVMVolumeGroup CRs left, remove their finalizers by running the following command:

$ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
- 1
- Replace <name> with the name of the CR.
After removing their finalizers, delete the CRs by running the following command:
$ oc delete lvmvolumegroup <name> 1
- 1
- Replace <name> with the name of the CR.
Remove any LVMVolumeGroupNodeStatus CRs by running the following command:

$ oc delete lvmvolumegroupnodestatus --all
Remove the LVMCluster CR by running the following command:

$ oc delete lvmcluster --all
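Optionally, verify that the cleanup removed all LVMCluster CRs; the command should report that no resources are found:

$ oc get lvmcluster -n openshift-storage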