Chapter 4. Configuring persistent storage
4.1. Persistent storage using AWS Elastic Block Store
OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver.
High-availability of storage in the infrastructure is left to the underlying storage provider.
OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
4.1.1. Creating the EBS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
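For illustration only, a dynamically provisioning EBS storage class might look like the following sketch. The name and the gp3 volume type are example values chosen here, not values mandated by this documentation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-example
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

With volumeBindingMode set to WaitForFirstConsumer, the volume is created in the availability zone of the node that first schedules a pod that uses the claim.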
4.1.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
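If you prefer to work from the CLI, an equivalent claim can be created from a YAML definition. The following is a sketch with placeholder values for the claim name, storage class, and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <claim-name>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 10Gi

Apply it with oc create -f <file_name>.yaml.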
4.1.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition.
This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.1.4. Maximum number of EBS volumes on a node
By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits. The volume limit depends on the instance type.
As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type.
For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator.
4.1.5. Encrypting container persistent volumes on AWS with a KMS key
Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS.
Prerequisites
- Underlying infrastructure must contain storage.
- You must create a customer KMS key on AWS.
Procedure
Create a storage class:
$ cat << EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
parameters:
  fsType: ext4
  encrypted: "true"
  kmsKeyId: keyvalue
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

- 1
- Specifies the name of the storage class.
- 2
- File system that is created on provisioned volumes.
- 3
- Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true, then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation.
Create a persistent volume claim (PVC) with the storage class specifying the KMS key:
$ cat << EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 1Gi
EOF

Create workload containers to consume the PVC:

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: httpd
      image: quay.io/centos7/httpd-24-centos7
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /mnt/storage
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc
EOF
4.2. Persistent storage using Azure
OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.2.1. Creating the Azure storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
Procedure
- In the OpenShift Container Platform console, click Storage → Storage Classes.
- In the storage class overview, click Create Storage Class.
- Define the desired options on the page that appears.
- Enter a name to reference the storage class.
- Enter an optional description.
- Select the reclaim policy.
- Select kubernetes.io/azure-disk from the drop-down list.
- Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS, Standard_LRS, StandardSSD_LRS, and UltraSSD_LRS.
- Enter the kind of account. Valid options are shared, dedicated, and managed.
Important
Red Hat only supports the use of kind: Managed in the storage class.
With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.
- Enter additional parameters for the storage class as desired.
- Click Create to create the storage class.
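For reference, a storage class that follows the supported settings described above might look like this sketch. The name is an example, and Premium_LRS with kind: Managed is only one valid combination of the options listed in this procedure:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-managed-premium-example
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer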
4.2.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.2.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition.
This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.2.4. Machine sets that deploy machines with ultra disks using PVCs
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.
Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
4.2.4.1. Creating machines with ultra disks by using machine sets
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

$ oc edit machineset <machine_set_name>

where <machine_set_name> is the machine set that you want to provision machines with ultra disks.

Add the following lines in the positions indicated:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd
      providerSpec:
        value:
          ultraSSDCapability: Enabled

Create a machine set using the updated configuration by running the following command:
$ oc create -f <machine_set_name>.yaml

Create a storage class that contains the following YAML definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc
parameters:
  cachingMode: None
  diskIopsReadWrite: "2000"
  diskMbpsReadWrite: "320"
  kind: managed
  skuname: UltraSSD_LRS
provisioner: disk.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

- 1
- Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
- 2
- Specify the number of IOPS for the storage class.
- 3
- Specify the throughput in MBps for the storage class.
- 4
- For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
- 5
- Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ultra-disk-sc
  resources:
    requests:
      storage: 4Gi

Create a pod that contains the following YAML definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ultra
spec:
  nodeSelector:
    disk: ultrassd
  containers:
    - name: nginx-ultra
      image: alpine:latest
      command:
        - "sleep"
        - "infinity"
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ultra-disk
Verification
Validate that the machines are created by running the following command:
$ oc get machines

The machines should be in the Running state.

For a machine that is running and has a node attached, validate the partition by running the following command:
$ oc debug node/<node_name> -- chroot /host lsblkIn this command,
starts a debugging shell on the nodeoc debug node/<node_name>and passes a command with<node_name>. The passed command--provides access to the underlying host OS binaries, andchroot /hostshows the block devices that are attached to the host OS machine.lsblk
Next steps
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
    - name: ssd-benchmark1
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
  volumes:
    - name: lun0p1
      hostPath:
        path: /var/lib/lun0p1
        type: DirectoryOrCreate
  nodeSelector:
    disktype: ultrassd
4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks
Use the information in this section to understand and recover from issues you might encounter.
4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk
If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered.
For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, describe the pod by running the following command:
$ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>
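The command output shows the events and status conditions that explain why the pod is stuck. It can also help to confirm that the capability was actually applied to the machine set you edited earlier. Machine sets typically live in the openshift-machine-api namespace, so a check might look like this:

$ oc get machineset <machine_set_name> -n openshift-machine-api -o yaml

Verify that ultraSSDCapability: Enabled appears under providerSpec.value, as shown in the earlier step.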
4.3. Persistent storage using Azure File
OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically.
Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications.
High availability of storage in the infrastructure is left to the underlying storage provider.
Azure File volumes use Server Message Block.
OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
Additional resources
4.3.1. Create the Azure File share persistent volume claim
To create the persistent volume claim, you must first define a Secret object that contains the Azure File credentials. This secret is then referenced in a PersistentVolume object, which in turn is mapped to a persistent volume claim for use in applications.
Prerequisites
- An Azure File share exists.
- The credentials to access this share, specifically the storage account and key, are available.
Procedure
Create a Secret object that contains the Azure File credentials:

$ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \
    --from-literal=azurestorageaccountkey=<storage-account-key>

Create a PersistentVolume object that references the Secret object you created:

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv0001"
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  storageClassName: azure-file-sc
  azureFile:
    secretName: <secret-name>
    shareName: share-1
    readOnly: false

Create a PersistentVolumeClaim object that maps to the persistent volume you created:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claim1"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi"
  storageClassName: azure-file-sc
  volumeName: "pv0001"

- 1
- The name of the persistent volume claim.
- 2
- The size of this persistent volume claim.
- 3
- The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition.
- 4
- The name of the existing PersistentVolume object that references the Azure File share.
4.3.2. Mount the Azure File share in a pod
After the persistent volume claim has been created, it can be used inside an application. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying Azure File share.
Procedure
Create a pod that mounts the existing persistent volume claim:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    ...
    volumeMounts:
      - mountPath: "/data"
        name: azure-file-share
  volumes:
    - name: azure-file-share
      persistentVolumeClaim:
        claimName: claim1

- 1
- The name of the pod.
- 2
- The path to mount the Azure File share inside the pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
- 3
- The name of the PersistentVolumeClaim object that has been previously created.
4.4. Persistent storage using Cinder
OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.
Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
4.4.1. Manual provisioning with Cinder
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Prerequisites
- OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP)
- Cinder volume ID
4.4.1.1. Creating the persistent volume
You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform:
Procedure
Save your object definition to a file.
cinder-persistentvolume.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv0001"
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  cinder:
    fsType: "ext3"
    volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180"

- 1
- The name of the volume that is used by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes.
- The file system that is created when the volume is mounted for the first time.
- 5
- The Cinder volume to use.
Important
Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.

Create the object definition file you saved in the previous step:
$ oc create -f cinder-persistentvolume.yaml
4.4.1.2. Persistent volume formatting
You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition.
4.4.1.3. Cinder volume security
If you use Cinder PVs in your application, configure security for their deployment configurations.
Prerequisites
- An SCC must be created that uses the appropriate fsGroup strategy.
Procedure
Create a service account and add it to the SCC:
$ oc create serviceaccount <service_account>

$ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>

In your application’s deployment configuration, provide the service account name and securityContext:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - image: openshift/hello-openshift
          name: helloworld
          ports:
            - containerPort: 8080
              protocol: TCP
      restartPolicy: Always
      serviceAccountName: <service_account>
      securityContext:
        fsGroup: 7777

- 1
- The number of copies of the pod to run.
- 2
- The label selector of the pod to run.
- 3
- A template for the pod that the controller creates.
- 4
- The labels on the pod. They must include labels from the label selector.
- 5
- The maximum name length after expanding any parameters is 63 characters.
- 6
- Specifies the service account you created.
- 7
- Specifies an fsGroup for the pods.
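The custom SCC itself is not shown in this procedure. As a rough sketch only, an SCC that pins the fsGroup range to the group ID used in the example above might look like the following; the name and range are illustrative, and your security requirements may call for different strategies:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: cinder-fsgroup-scc
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
  ranges:
    - min: 7777
      max: 7777
supplementalGroups:
  type: RunAsAny
volumes:
  - persistentVolumeClaim
  - secret
  - configMap
  - downwardAPI
  - emptyDir

After creating it, add it to the service account with the oc adm policy add-scc-to-user command shown in the procedure.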
4.5. Persistent storage using Fibre Channel
OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed.
Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.5.1. Provisioning
To provision Fibre Channel volumes using the PersistentVolume API, the following must be available:
- The targetWWNs (array of Fibre Channel target’s World Wide Names).
- A valid LUN number.
- The filesystem type.
A persistent volume and a LUN have a one-to-one mapping between them.
Prerequisites
- Fibre Channel LUNs must exist in the underlying infrastructure.
PersistentVolume object definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
fc:
wwids: [3600508b400105e210000900000490000]
targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5']
lun: 2
fsType: ext4
- 1
- World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems.
- Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Important
Changing the value of the fstype parameter after the volume is formatted and provisioned can result in data loss and pod failure.
4.5.1.1. Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
4.5.1.2. Fibre Channel volume security
Users request storage with a persistent volume claim. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.
4.6. Persistent storage using FlexVolume
FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
Out-of-tree Container Storage Interface (CSI) drivers are the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI drivers.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.
To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the flexvolume in-tree volume plugin.
4.6.1. About FlexVolume drivers
A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source.
Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume.
4.6.2. FlexVolume driver example
The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data.
The FlexVolume driver contains:
- All flexVolume.options.
- Some options from flexVolume prefixed by kubernetes.io/, such as fsType and readwrite.
- The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/.
FlexVolume driver JSON input example
{
"fooServer": "192.168.0.1:1234",
"fooVolumeName": "bar",
"kubernetes.io/fsType": "ext4",
"kubernetes.io/readwrite": "ro",
"kubernetes.io/secret/<key name>": "<key value>",
"kubernetes.io/secret/<another key name>": "<another key value>",
}
OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation.
FlexVolume driver default output example
{
"status": "<Success/Failure/Not supported>",
"message": "<Reason for success/failure>"
}
Exit code of the driver should be 0 for success and 1 for error.
Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation.
4.6.3. Installing FlexVolume drivers
FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required.
Prerequisites
FlexVolume drivers must implement these operations:
init
Initializes the driver. It is called during initialization of all nodes.
- Arguments: none
- Executed on: node
- Expected output: default JSON

mount
Mounts a volume to a directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON

unmount
Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON

mountdevice
Mounts a volume’s device to a directory where individual pods can then bind mount.
This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out.
- Arguments: <mount-dir> <json>
- Executed on: node
- Expected output: default JSON

unmountdevice
Unmounts a volume’s device from a directory.
- Arguments: <mount-dir>
- Executed on: node
- Expected output: default JSON

All other operations should return JSON with {"status": "Not supported"} and exit code 1.
Procedure
To install the FlexVolume driver:
- Ensure that the executable file exists on all nodes in the cluster.
- Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>.
For example, to install the FlexVolume driver for the storage foo, place the executable file at:
/etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo
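To make the expected call-out behavior concrete, the following is a minimal, hypothetical skeleton for the fictional foo driver described above. It is a sketch of the input and output contract only, not a production driver; a real driver would mount actual back-end storage in the mount and unmount branches:

#!/bin/bash
# Hypothetical FlexVolume driver skeleton for the fictional "foo" driver.
# The first argument is always the operation name; the remaining arguments
# depend on the operation.

op=$1
shift

case "$op" in
    init)
        # Called during initialization of all nodes.
        echo '{"status": "Success"}'
        exit 0
        ;;
    mount)
        mount_dir=$1
        json_options=$2
        # A real driver would parse $json_options and mount the back-end
        # storage at $mount_dir here.
        mkdir -p "$mount_dir"
        echo '{"status": "Success"}'
        exit 0
        ;;
    unmount)
        mount_dir=$1
        # A real driver would clean up the volume here.
        umount "$mount_dir" 2>/dev/null || true
        echo '{"status": "Success"}'
        exit 0
        ;;
    *)
        # All other operations report "Not supported" and exit with 1.
        echo '{"status": "Not supported"}'
        exit 1
        ;;
esac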
4.6.4. Consuming storage using FlexVolume drivers
Each PersistentVolume object represents one storage asset, such as a volume, in the back-end storage.
Procedure
- Use the PersistentVolume object to reference the installed storage.
Persistent volume object definition using FlexVolume drivers example
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
flexVolume:
driver: openshift.com/foo
fsType: "ext4"
secretRef: foo-secret
readOnly: true
options:
fooServer: 192.168.0.1:1234
fooVolumeName: bar
- 1
- The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage.
- 2
- The amount of storage allocated to this volume.
- 3
- The name of the driver. This field is mandatory.
- 4
- The file system that is present on the volume. This field is optional.
- 5
- The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional.
- 6
- The read-only flag. This field is optional.
- 7
- The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: "fsType":"<FS type>", "readwrite":"<rw>", "secret/key1":"<secret1>" ... "secret/keyN":"<secretN>"
Secrets are passed only to mount or unmount call-outs.
4.7. Persistent storage using GCE Persistent Disk
OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
GCE Persistent Disk volumes can be provisioned dynamically.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.
For more information about migration, see CSI automatic migration.
High availability of storage in the infrastructure is left to the underlying storage provider.
4.7.1. Creating the GCE storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
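As an illustration, a storage class for dynamically provisioned GCE Persistent Disk volumes might look like the following sketch. The name and the pd-ssd disk type are example values, not requirements:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd-example
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: none
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer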
4.7.2. Creating the persistent volume claim
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the desired options on the page that appears.
- Select the previously-created storage class from the drop-down menu.
- Enter a unique name for the storage claim.
- Select the access mode. This selection determines the read and write access for the storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.7.3. Volume format
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition.
This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.
4.8. Persistent storage using iSCSI
You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
High-availability of storage in the infrastructure is left to the underlying storage provider.
When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260.
Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi. The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS).
For more information, see Managing Storage Devices.
4.8.1. Provisioning
Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API.
Procedure
- Create a PersistentVolume object definition similar to the following:
PersistentVolume object definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.16.154.81:3260
iqn: iqn.2014-12.example.server:storage.target00
lun: 0
fsType: 'ext4'
4.8.2. Enforce disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.
Enforcing quotas in this way allows the user to request persistent storage by a specific amount (for example, 10Gi) and be matched with a corresponding volume of equal or greater capacity.
4.8.3. iSCSI volume security
Users request storage with a PersistentVolumeClaim object. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail.
Each iSCSI LUN must be accessible by all nodes in the cluster.
4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration
Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
chapAuthDiscovery: true
chapAuthSession: true
secretRef:
name: chap-secret
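The chap-secret object referenced above is not defined in this example. As a sketch, it could be created as follows; the key names are the ones used by the Kubernetes iSCSI CHAP interface, and the credential values are placeholders:

$ oc create secret generic chap-secret \
    --type="kubernetes.io/iscsi-chap" \
    --from-literal=discovery.sendtargets.auth.username=<discovery_user> \
    --from-literal=discovery.sendtargets.auth.password=<discovery_password> \
    --from-literal=node.session.auth.username=<session_user> \
    --from-literal=node.session.auth.password=<session_password>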
4.8.4. iSCSI multipathing
For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail.
Procedure
- To specify multi-paths in the pod specification, specify a value in the portals field of the PersistentVolume object definition.
Example PersistentVolume object with a value specified in the portals field.
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
readOnly: false
- 1
- Add additional target portals using the portals field.
4.8.5. iSCSI custom initiator IQN
Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs.
Procedure
- To specify a custom initiator IQN, update the initiatorName field in the PersistentVolume object definition.
Example PersistentVolume object with a value specified in the initiatorName field.
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
initiatorName: iqn.2016-04.test.com:custom.iqn
fsType: ext4
readOnly: false
- 1
- Specify the name of the initiator.
4.9. Persistent storage using NFS
OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
4.9.1. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required.
Procedure
Create an object definition for the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /tmp
    server: 172.17.0.2
  persistentVolumeReclaimPolicy: Retain

- 1
- The name of the volume. This is the PV identity in various oc <command> pod commands.
- 2
- The amount of storage allocated to this volume.
- 3
- Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
- 4
- The volume type being used, in this case the nfs plugin.
- 5
- The path that is exported by the NFS server.
- 6
- The hostname or IP address of the NFS server.
- 7
- The reclaim policy for the PV. This defines what happens to a volume when released.
Note
Each NFS volume must be mountable by all schedulable nodes in the cluster.
Verify that the PV was created:
$ oc get pv

Example output

NAME     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   <none>    5Gi        RWO           Available                    31s

Create a persistent volume claim that binds to the new PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0001
  storageClassName: ""

Verify that the persistent volume claim was created:

$ oc get pvc

Example output

NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-claim1   Bound    pv0001   5Gi        RWO                           2m
4.9.2. Enforce disk quotas
You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
4.9.3. NFS volume security
This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition.
The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs.
As an example, if the target NFS directory appears on the NFS server as:
$ ls -lZ /opt/nfs -d
Example output
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
$ id nfsnobody
Example output
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups to access the directory.
Note
The owner ID of 65534 is used as an example. Even though NFS root_squash maps root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.
4.9.3.1. Group IDs
The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.
To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.
Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:
spec:
containers:
- name:
...
securityContext:
supplementalGroups: [5555]
Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.
As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
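As an illustration, if the custom SCC were named nfs-supplemental-scc (a hypothetical name), adding it to the default service account of a project would look like this:

$ oc adm policy add-scc-to-user nfs-supplemental-scc -z default -n <project>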
4.9.3.2. User IDs
User IDs can be defined in the container image or in the Pod definition.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:
spec:
containers:
- name:
...
securityContext:
runAsUser: 65534
Assuming that the project is default and the SCC is restricted, the user ID of 65534 requested by the pod is not allowed, and the pod fails for the following reasons:
- It requests 65534 as its user ID.
- All SCCs available to the pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.
- Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
- 65534 is not included in the SCC or project’s user ID range.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
4.9.3.3. SELinux
Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default.
For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.
Prerequisites
- The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.
Procedure
Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.

# setsebool -P virt_use_nfs 1
4.9.3.4. Export settings
To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Every export must be exported using the following format:

/<example_fs> *(rw,root_squash)

The firewall must be configured to allow traffic to the mount point.

For NFSv4, configure the default port 2049 (nfs).

# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
- The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container’s primary UID, or supply the pod group access using supplementalGroups, as shown in group IDs above.
4.9.4. Reclaiming resources
NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.
By default, PVs are set to Retain.
Once the claim to a PV is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs1
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.1.1
path: "/"
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim to nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs2
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.1.1
path: "/"
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.
4.9.5. Additional configuration and troubleshooting
Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:
| Issue | Resolution |
| --- | --- |
| NFSv4 mount incorrectly shows all files with ownership of nobody:nobody | Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server. |
| Disabling ID mapping on NFSv4 | On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping |
4.10. Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.
4.11. Persistent storage using VMware vSphere volumes
OpenShift Container Platform allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image.
OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
For vSphere:
For new installations of OpenShift Container Platform 4.13, or later, automatic migration is enabled by default. Updating to OpenShift Container Platform 4.14 and later also provides automatic migration.
CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
- When updating from OpenShift Container Platform 4.12, or earlier, to 4.13, automatic CSI migration for vSphere only occurs if you opt in. If you do not opt in, OpenShift Container Platform defaults to using the in-tree (non-CSI) plugin to provision vSphere storage. Carefully review the indicated consequences before opting in to migration.
4.11.1. Dynamically provisioning VMware vSphere volumes
Dynamically provisioning VMware vSphere volumes is the recommended method.
4.11.2. Prerequisites
- An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support.
You can use either of the following procedures to dynamically provision these volumes using the default storage class.
4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI
OpenShift Container Platform installs a default storage class, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
- In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
- In the persistent volume claims overview, click Create Persistent Volume Claim.
- Define the required options on the resulting page.
- Select the thin storage class.
- Enter a unique name for the storage claim.
- Select the access mode to determine the read and write access for the created storage claim.
- Define the size of the storage claim.
- Click Create to create the persistent volume claim and generate a persistent volume.
4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI
OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk format for provisioning volumes.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure (CLI)
You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the following contents:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Enter the following command to create the PersistentVolumeClaim object from the file:

$ oc create -f pvc.yaml
4.11.3. Statically provisioning VMware vSphere volumes
To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework.
Prerequisites
- Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.
Procedure
Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods:
Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:

$ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk

Create using vmware-vdiskmanager:

$ shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
Create a persistent volume that references the VMDKs. Create a file, pv1.yaml, with the PersistentVolume object definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4

- 1
- The name of the volume. This name is how it is identified by persistent volume claims or pods.
- 2
- The amount of storage allocated to this volume.
- 3
- The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
- 4
- The existing VMDK volume to use. If you used vmkfstools, you must enclose the datastore name in square brackets, [], in the volume definition, as shown previously.
- 5
- The file system type to mount. For example, ext4, xfs, or other file systems.
Important
Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Create the PersistentVolume object from the file:

$ oc create -f pv1.yaml

Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml, with the PersistentVolumeClaim object definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  volumeName: pv1

Create the PersistentVolumeClaim object from the file:

$ oc create -f pvc1.yaml
4.11.3.1. Formatting VMware vSphere volumes
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume definition.
Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.