Chapter 22. Configuring for VMware vSphere
You can configure OpenShift Container Platform to access VMware vSphere VMDK Volumes. This includes using VMware vSphere VMDK Volumes as persistent storage for application data.
The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:
- Volumes
- Persistent volumes
- Storage classes and provisioning volumes
22.1. Before you begin
22.1.1. VMware vSphere cloud provider prerequisites
Prerequisites
Enabling VMware vSphere requires installing VMware Tools on each Node VM. See Installing VMware Tools for more information.
Procedure
- Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
Verify that the Node VM names comply with the following regex (a quick shell check appears after the list below):
[a-z](([-0-9a-z])*[0-9a-z])?(\.[a-z0-9](([-0-9a-z])*[0-9a-z])?)*
Important: VM names cannot:
- Begin with numbers.
- Contain any capital letters.
- Contain any special characters except -.
- Be shorter than three characters or longer than 63 characters.
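As a quick sanity check, you can test a candidate name against the regex from any shell with GNU grep; the VM name below is only an example. Note that the pattern alone does not enforce the 3-63 character length limits listed above.
echo "app-node-1.example.com" | grep -qE '^[a-z](([-0-9a-z])*[0-9a-z])?(\.[a-z0-9](([-0-9a-z])*[0-9a-z])?)*$' && echo "name is valid"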
Set the disk.EnableUUID parameter to true for each Node VM. This ensures that the VMware vSphere Virtual Machine Disk (VMDK) always presents a consistent UUID to the VM, allowing the disk to be mounted properly.
For every vSphere virtual machine node that will participate in the cluster, follow the steps below using the vSphere console:
Navigate to VM Properties → VM Options → Advanced → Configuration Parameters and set disk.enableUUID=TRUE.
Alternatively, the GOVC tool can be used. Set up the GOVC environment:
curl -LO https://github.com/vmware/govmomi/releases/download/v0.15.0/govc_linux_amd64.gz
gunzip govc_linux_amd64.gz
chmod +x govc_linux_amd64
cp govc_linux_amd64 /usr/bin/govc
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
Find the Node VM paths:
govc ls /datacenter/vm/<vm-folder-name>
Set disk.EnableUUID to true for all VMs:
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
Note: If OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
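For example, the flag can be set once on the template with the same govc command shown above; the template path here is hypothetical:
govc vm.change -e="disk.enableUUID=1" -vm='/datacenter/vm/templates/node-template'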
Create and assign roles to the vSphere Cloud Provider user and vSphere entities. vSphere Cloud Provider requires the following privileges to interact with vCenter.
Role: manage-k8s-node-vms
  Privileges: Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete
  Entities: Cluster, Hosts, VM Folder
  Propagate to Children: Yes
Role: manage-k8s-volumes
  Privileges: Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View
  Entities: Datastore
  Propagate to Children: No
Role: k8s-system-read-and-spbm-profile-view
  Privileges: StorageProfile.View, System.Anonymous, System.Read, System.View
  Entities: vCenter
  Propagate to Children: No
Role: ReadOnly
  Privileges: System.Anonymous, System.Read, System.View
  Entities: Datacenter, Datastore Cluster, Datastore Storage Folder
  Propagate to Children: No
See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.
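If you prefer the command line to the vSphere web client, the role creation and assignment can also be sketched with govc. This is an illustrative sketch, not the documented procedure: it assumes a govc build that includes the role.* and permissions.* commands, a hypothetical k8s-vcp@vsphere.local user, and relies on the System.* privileges being included in every vSphere role by default:
govc role.create manage-k8s-volumes Datastore.AllocateSpace Datastore.FileManagement
govc permissions.set -principal k8s-vcp@vsphere.local -role manage-k8s-volumes -propagate=false /datacenter/datastore/datastore1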
22.2. Configuring OpenShift Container Platform for vSphere
You can configure OpenShift Container Platform for vSphere in two ways:
22.2.1. Option 1: Configuring OpenShift Container Platform for vSphere using Ansible
You can configure OpenShift Container Platform for the VMware vSphere Cloud Provider (VCP) by modifying the Ansible inventory file. These changes can be made before installation, or to an existing cluster.
Procedure
Add the following to the Ansible inventory file:
[OSEv3:vars]
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username=administrator@vsphere.local 1
openshift_cloudprovider_vsphere_password=<password>
openshift_cloudprovider_vsphere_host=10.x.y.32 2
openshift_cloudprovider_vsphere_datacenter=<Datacenter> 3
openshift_cloudprovider_vsphere_datastore=<Datastore> 4
1 - The vCenter user with the required privileges.
2 - The vCenter host (IP address or FQDN).
3 - The vCenter data center where the node VMs reside.
4 - The datastore used to create VMDKs for persistent volumes.
Run the deploy_cluster.yml playbook:
$ ansible-playbook -i <inventory_file> \
    playbooks/deploy_cluster.yml
Installing with Ansible also creates and configures the following files to fit your vSphere environment:
- /etc/origin/cloudprovider/vsphere.conf
- /etc/origin/master/master-config.yaml
- /etc/origin/node/node-config.yaml
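To spot-check the generated cloud provider configuration, you can inspect the files listed above; for example (the output will reflect your inventory values):
# cat /etc/origin/cloudprovider/vsphere.conf
# grep -A2 cloud-config /etc/origin/master/master-config.yaml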
As a reference, a full inventory is shown as follows:
Note: The openshift_cloudprovider_vsphere_ values are required for OpenShift Container Platform to be able to create vSphere resources such as VMDKs on datastores for persistent volumes.
$ cat /etc/ansible/hosts

[OSEv3:children]
ansible
masters
infras
apps
etcd
nodes
lb

[OSEv3:vars]
become=yes
ansible_become=yes
ansible_user=root

openshift_release="v3.10"
openshift_version="3.10"
openshift_deployment_type=openshift-enterprise

# Required per https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# vSphere Cloud provider
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username="administrator@vsphere.local"
openshift_cloudprovider_vsphere_password="password"
openshift_cloudprovider_vsphere_host="vcsa65-dc1.example.com"
openshift_cloudprovider_vsphere_datacenter=Datacenter
openshift_cloudprovider_vsphere_cluster=Cluster
openshift_cloudprovider_vsphere_resource_pool=ResourcePool
openshift_cloudprovider_vsphere_datastore="datastore"
openshift_cloudprovider_vsphere_folder="folder"

# Service catalog
openshift_hosted_etcd_storage_kind=dynamic
openshift_hosted_etcd_storage_volume_name=etcd-vol
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]

# Setup vsphere registry storage
openshift_hosted_registry_storage_kind=vsphere
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume']
openshift_hosted_registry_replicas=1

openshift_hosted_router_replicas=3
openshift_master_cluster_method=native
openshift_node_local_quota_per_fsgroup=512Mi
default_subdomain=example.com
openshift_master_cluster_hostname=openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com
openshift_master_default_subdomain=apps.example.com
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
osm_use_cockpit=true

# Red Hat subscription name and password
rhsub_user=username
rhsub_pass=password
rhsub_pool=8a85f9815e9b371b015e9b501d081d4b

# metrics
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_storage_volume_size=25Gi

# logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=30Gi
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_cluster_size=1
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_fluentd_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_storage_kind=dynamic

#registry
openshift_public_hostname=openshift.example.com

[ansible]
localhost

[masters]
master-0.example.com vm_name=master-0 ipv4addr=10.x.y.103
master-1.example.com vm_name=master-1 ipv4addr=10.x.y.104
master-2.example.com vm_name=master-2 ipv4addr=10.x.y.105

[infras]
infra-0.example.com vm_name=infra-0 ipv4addr=10.x.y.100
infra-1.example.com vm_name=infra-1 ipv4addr=10.x.y.101
infra-2.example.com vm_name=infra-2 ipv4addr=10.x.y.102

[apps]
app-0.example.com vm_name=app-0 ipv4addr=10.x.y.106
app-1.example.com vm_name=app-1 ipv4addr=10.x.y.107
app-2.example.com vm_name=app-2 ipv4addr=10.x.y.108

[etcd]
master-0.example.com
master-1.example.com
master-2.example.com

[lb]
haproxy-0.example.com vm_name=haproxy-0 ipv4addr=10.x.y.200

[nodes]
master-0.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
master-1.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
master-2.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
infra-0.example.com openshift_node_group_name="node-config-infra"
infra-1.example.com openshift_node_group_name="node-config-infra"
infra-2.example.com openshift_node_group_name="node-config-infra"
app-0.example.com openshift_node_group_name="node-config-compute"
app-1.example.com openshift_node_group_name="node-config-compute"
app-2.example.com openshift_node_group_name="node-config-compute"
Deploying a vSphere VM environment is not officially supported by Red Hat, but it can be configured.
22.2.2. Option 2: Manually configuring OpenShift Container Platform for vSphere
22.2.2.1. Manually configuring master hosts for vSphere
Perform the following on all master hosts.
Procedure
Edit the master configuration file (/etc/origin/master/master-config.yaml by default) on all masters and update the contents of the apiServerArguments and controllerArguments sections:
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
Important: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
When you configure OpenShift Container Platform for vSphere using Ansible, the /etc/origin/cloudprovider/vsphere.conf file is created automatically. Because you are manually configuring OpenShift Container Platform for vSphere, you must create the file. Before you create the file, decide if you want multiple vCenter zones or not.
The cluster installation process configures a single zone and a single vCenter by default. Deploying OpenShift Container Platform in vSphere across multiple zones can help avoid single points of failure, but it creates the need for shared storage across zones: if an OpenShift Container Platform node host goes down in zone "A", the pods must be able to be rescheduled onto nodes in zone "B". See Multiple zone limitations in the Kubernetes documentation for more information.
To configure a single vCenter server, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:
[Global] 1
user = "myusername" 2
password = "mypassword" 3
port = "443" 4
insecure-flag = "1" 5
datacenters = "mydatacenter" 6

[VirtualCenter "10.10.0.2"] 7
user = "myvCenterusername"
password = "password"

[Workspace] 8
server = "10.10.0.2" 9
datacenter = "mydatacenter"
folder = "path/to/vms" 10
default-datastore = "shared-datastore" 11
resourcepool-path = "myresourcepoolpath" 12

[Disk]
scsicontrollertype = pvscsi 13

[Network]
public-network = "VM Network" 14
1 - Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
2 - vCenter username for the vSphere cloud provider.
3 - vCenter password for the specified user.
4 - Optional. Port number for the vCenter server. Defaults to port 443.
5 - Set to 1 if the vCenter uses a self-signed certificate.
6 - Name of the data center on which Node VMs are deployed.
7 - Overrides specific [Global] properties for this virtual center. Possible settings are port, user, insecure-flag, and datacenters. Any settings not specified are pulled from the [Global] section.
8 - Set any properties used for various vSphere Cloud Provider functionality, for example dynamic provisioning and Storage Profile Based Volume provisioning.
9 - IP address or FQDN for the vCenter server.
10 - Path to the VM directory for node VMs.
11 - Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OpenShift Container Platform 3.9, if the datastore was located in a storage folder or was a member of a datastore cluster, the full path was required.
12 - Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
13 - Type of SCSI controller that the VMDK is attached to the VM as.
14 - Set to the network port group that vSphere uses to access the node, called VM Network by default. This is the node host's ExternalIP that is registered with Kubernetes.
To configure multiple vCenter servers, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:
[Global] 1
user = "myusername" 2
password = "mypassword" 3
port = "443" 4
insecure-flag = "1" 5
datacenters = "us-east, us-west" 6

[VirtualCenter "10.10.0.2"] 7
user = "myvCenterusername"
password = "password"

[VirtualCenter "10.10.0.3"]
port = "448"
insecure-flag = "0"

[Workspace] 8
server = "10.10.0.2" 9
datacenter = "mydatacenter"
folder = "path/to/vms" 10
default-datastore = "shared-datastore" 11
resourcepool-path = "myresourcepoolpath" 12

[Disk]
scsicontrollertype = pvscsi 13

[Network]
public-network = "VM Network" 14
1 - Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
2 - vCenter username for the vSphere cloud provider.
3 - vCenter password for the specified user.
4 - Optional. Port number for the vCenter server. Defaults to port 443.
5 - Set to 1 if the vCenter uses a self-signed certificate.
6 - Names of the data centers on which Node VMs are deployed.
7 - Overrides specific [Global] properties for this virtual center. Possible settings are port, user, insecure-flag, and datacenters. Any settings not specified are pulled from the [Global] section.
8 - Set any properties used for various vSphere Cloud Provider functionality, for example dynamic provisioning and Storage Profile Based Volume provisioning.
9 - IP address or FQDN for the vCenter server with which the cloud provider communicates.
10 - Path to the VM directory for node VMs.
11 - Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OpenShift Container Platform 3.9, if the datastore was located in a storage folder or was a member of a datastore cluster, the full path was required.
12 - Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
13 - Type of SCSI controller that the VMDK is attached to the VM as.
14 - Set to the network port group that vSphere uses to access the node, called VM Network by default. This is the node host's ExternalIP that is registered with Kubernetes.
Important: The disk.EnableUUID parameter must be set to true for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly.
For every virtual machine node that will participate in the cluster, navigate to VM Properties → VM Options → Advanced → Configuration Parameters and set disk.enableUUID=TRUE.
Alternatively, the GOVC tool can be used:
Set up the GOVC environment:
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
Find the Node VM paths:
govc ls /datacenter/vm/<vm-folder-name>
Set disk.EnableUUID to true for all VMs:
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
Note: If OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
Restart the OpenShift Container Platform host services:
# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node
22.2.2.2. Manually configuring node hosts for vSphere
Perform the following on all node hosts.
Procedure
To configure the OpenShift Container Platform nodes for vSphere:
Edit the appropriate node configuration map and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
Important: The nodeName must match the VM name in vSphere in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.
Restart the OpenShift Container Platform services on all nodes.
# systemctl restart atomic-openshift-node
22.2.2.3. Applying Configuration Changes
Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Container Platform services:
# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:
- Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OpenShift Container Platform service.
# systemctl restart atomic-openshift-node
- Add back any labels on each node that you previously had.
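For example, if the label backup from the earlier step showed that a compute node carried the default node-role label, it could be re-applied as follows; the node name and label here are illustrative:
$ oc label node app-0.example.com node-role.kubernetes.io/compute=true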
22.2.3. Configuring OpenShift Container Platform to use vSphere storage
OpenShift Container Platform supports VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
OpenShift Container Platform creates the disk in vSphere and attaches the disk to the proper instance.
The OpenShift Container Platform persistent volume (PV) framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. vSphere VMDK volumes can be provisioned dynamically.
PVs are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. PV claims, however, are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
Prerequisites
Before creating PVs using vSphere, ensure your OpenShift Container Platform cluster meets the following requirements:
- OpenShift Container Platform must first be configured for the vSphere Cloud Provider.
- Each node host name in the infrastructure must match the corresponding vSphere VM name (a quick check is shown after this list).
- Each node host must be in the same resource group.
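One way to verify the name requirement is to compare the node names registered with OpenShift Container Platform against the VM folder contents; the folder path below reuses the placeholder from the earlier GOVC example:
$ oc get nodes -o name
$ govc ls /datacenter/vm/<vm-folder-name>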
22.2.3.1. Provisioning VMware vSphere volumes
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for vSphere, all that is required for OpenShift Container Platform and vSphere is a VM folder path, file system type, and the PersistentVolume API.
22.2.3.1.1. Creating persistent volumes
Define a PV object definition, for example vsphere-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume: 3
    volumePath: "[datastore1] volumes/myDisk" 4
    fsType: ext4 5
1 - The name of the volume. This is how it is identified by PV claims or from pods.
2 - The amount of storage allocated to this volume.
3 - The volume type being used. This example uses vsphereVolume; the label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
4 - This VMDK volume must exist, and you must include the brackets ([]) in the volume definition.
5 - The file system type to mount, for example ext4, xfs, or another file system.
Important: Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Create the PV:
$ oc create -f vsphere-pv.yaml
persistentvolume "pv0001" created
Verify that the PV was created:
$ oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    2Gi        RWO           Available                       2s
You can now request storage using PV claims, which bind to the new PV.
PV claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a PV from a different namespace causes the pod to fail.
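As an illustration, a minimal claim that could bind to the pv0001 volume created above might look like the following; the claim name and file name are hypothetical:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Create the claim in your project:
$ oc create -f vsphere-claim.yaml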
22.2.3.1.2. Formatting VMware vSphere volumes
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the given file system.
This allows unformatted vSphere volumes to be used as PVs, because OpenShift Container Platform formats them before the first use.
22.2.3.2. Provisioning VMware vSphere volumes via a Storage Class
OpenShift Container Platform creates the following storageclass when you use the vsphere-volume provisioner and the openshift_cloudprovider_kind=vsphere and openshift_vsphere_* variables in the Ansible inventory. Otherwise, you can create it manually:
$ oc get --export storageclass vsphere-standard -o yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "vsphere-standard" 1
provisioner: kubernetes.io/vsphere-volume 2
parameters:
  diskformat: zeroedthick 3
  datastore: "ose3-vmware" 4
reclaimPolicy: Delete
1 - The name of the storage class.
2 - The vSphere volume provisioner.
3 - The disk format used for new volumes.
4 - The source datastore where the volume VMDKs are created.
After you request a PV using the storageclass shown in the previous step, OpenShift Container Platform creates VMDK disks in the vSphere infrastructure. To verify that the disks were created:
$ ls /vmfs/volumes/ose3-vmware/kubevols | grep kubernetes kubernetes-dynamic-pvc-790615e8-a22a-11e8-bc85-0050568e2982.vmdk
vSphere-volume disks use the ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information.
22.2.4. About Red Hat OpenShift Container Storage
Red Hat OpenShift Container Storage (RHOCS) is a provider of platform-agnostic persistent storage for OpenShift Container Platform, either on premises or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OpenShift Container Platform for deployment, management, and monitoring, regardless of whether it is installed on OpenShift Container Platform (converged) or alongside OpenShift Container Platform (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.10 Deployment Guide.
22.2.5. Configuring the OpenShift Container Platform registry for vSphere
The following steps define the manual process of storage creation, which is used to create storage for the registry if a storage class is unavailable or not used.
# VMFS
cd /vmfs/volumes/datastore1/
mkdir kubevols # Not needed but good hygiene

# VSAN
cd /vmfs/volumes/vsanDatastore/
/usr/lib/vmware/osfs/bin/osfs-mkdir kubevols # Needed

cd kubevols
vmkfstools -c 25G registry.vmdk
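A persistent volume definition that exposes the manually created disk could then look like the following sketch; it reuses the datastore1 path and 25G size from the commands above, and the PV name is hypothetical:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume
spec:
  capacity:
    storage: 25Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] kubevols/registry.vmdk"
    fsType: ext4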
22.2.5.1. Configuring the OpenShift Container Platform registry for vSphere using Ansible
Procedure
To configure the Ansible inventory for the registry to use a vSphere volume:
[OSEv3:vars]
# vSphere Provider Configuration
openshift_hosted_registry_storage_kind=vsphere 1
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] 2
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] 3
openshift_hosted_registry_replicas=1 4
1 - The storage type to use for the registry.
2 - vSphere volumes only support the ReadWriteOnce access mode.
3 - The annotation that selects the vSphere volume provisioner.
4 - The number of registry replicas; with ReadWriteOnce storage, only one replica can mount the volume.
The brackets in the configuration file above are required.
22.2.5.2. Manually configuring OpenShift Container Platform registry for vSphere
To use vSphere volume storage, edit the registry’s configuration file and mount to the registry pod.
Procedure
Create a new configuration file for the vSphere volume claim, for example pvc-registry.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vsphere-registry-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: vsphere-standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
Create the PVC in OpenShift Container Platform:
$ oc create -f pvc-registry.yaml
Update the volume configuration to use the new PVC:
$ oc volume dc docker-registry --add --name=registry-storage -t pvc \
    --claim-name=vsphere-registry-storage --overwrite
Redeploy the registry to read the updated configuration:
$ oc rollout latest docker-registry -n default
Verify the volume has been assigned:
$ oc volume dc docker-registry -n default
22.3. Backup of persistent volumes
OpenShift Container Platform provisions new volumes as independent persistent disks so that the volume can be freely attached and detached on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots.
To create a backup of PVs:
- Stop the application using the PV.
- Clone the persistent disk (see the example after this list).
- Restart the application.
- Create a backup of the cloned disk.
- Delete the cloned disk.
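For the clone step, a VMDK can be copied on the ESXi host with vmkfstools; the directory and file names below are illustrative, assuming the volume resides in the kubevols directory created earlier:
# cd /vmfs/volumes/datastore1/kubevols
# vmkfstools -i pv-disk.vmdk pv-disk-backup.vmdk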