Chapter 25. Configuring for VMware vSphere
You can configure OpenShift Container Platform to use VMware vSphere VMDKs to back PersistentVolumes. This configuration can include using VMware vSphere VMDKs as persistent storage for application data.
The vSphere Cloud Provider allows using vSphere-managed storage in OpenShift Container Platform and supports every storage primitive that Kubernetes uses:
- PersistentVolume (PV)
- PersistentVolumeClaim (PVC)
- StorageClass
PersistentVolumes requested by stateful containerized applications can be provisioned on VMware vSAN, VVOL, VMFS, or NFS datastores.
Kubernetes PVs are defined in Pod specifications. They can reference VMDK files directly if you use static provisioning, or through PVCs if you use dynamic provisioning, which is preferred.
The latest updates to the vSphere Cloud Provider are in vSphere Storage for Kubernetes.
25.1. Before you begin
25.1.1. Requirements
VMware vSphere
Standalone ESXi is not supported.
- vSphere version 6.0.x at minimum; the recommended version, 6.7 U1b, is required if you intend to support a complete VMware Validated Design.
- vSAN, VMFS, and NFS datastores are supported.
- vSAN support is limited to one cluster in one vCenter.
OpenShift Container Platform 3.11 is supported and deploys on vSphere 7 clusters. If you use the vSphere in-tree storage driver, the vSAN, VMFS, and NFS storage options are also supported.
Prerequisites
You must install the VMware Tools on each Node VM. See Installing VMware tools for more information.
You can use the open source VMware govmomi CLI tool govc for additional configuration and troubleshooting. For example, see the following govc CLI configuration:

export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
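After you export these variables, you can optionally confirm that govc can reach the vCenter server. This is a hedged example; govc about simply prints version information for the configured endpoint:

$ govc about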
25.1.1.1. Permissions
Create and assign roles to the vSphere Cloud Provider. A vCenter user with the required set of privileges is required.
In general, the vSphere user designated to the vSphere Cloud Provider must have the following permissions:
- Read permission on the parent entities of the node VMs, such as folder, host, datacenter, datastore folder, datastore cluster, and so on.
- VirtualMachine.Inventory.Create/Delete permission on the resource pool defined in vsphere.conf. This is used to create and delete test VMs.
See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.
vSphere Cloud Provider supports OpenShift Container Platform clusters that span multiple vCenters. Make sure that all of the above privileges are correctly set for all vCenters.
Dynamic persistent volume creation is the recommended practice. The following roles and privileges are required when you use dynamic provisioning:
Roles | Privileges | Entities | Propagate to children |
---|---|---|---|
manage-k8s-node-vms | Resource.AssignVMToPool, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete, VirtualMachine.Config.Settings | Cluster, Hosts, VM Folder | Yes |
manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement (Low level file operations) | Datastore | No |
k8s-system-read-and-spbm-profile-view | StorageProfile.View (Profile-driven storage view) | vCenter | No |
Read-only (pre-existing default role) | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
Datastore.FileManagement is required for the manage-k8s-volumes role only if you create PVCs to bind with statically provisioned PVs and set the reclaim policy to Delete. When the PVC is deleted, the associated statically provisioned PVs are also deleted. If you use static provisioning only, the following reduced set of roles and privileges is sufficient:
Roles | Privileges | Entities | Propagate to Children |
---|---|---|---|
manage-k8s-node-vms | VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk | VM Folder | Yes |
manage-k8s-volumes | Datastore.FileManagement (Low level file operations) | Datastore | No |
Read-only (pre-existing default role) | System.Anonymous, System.Read, System.View | vCenter, Datacenter, Datastore Cluster, Datastore Storage Folder, Cluster, Hosts | No … |
Procedure
- Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
- Set the disk.EnableUUID parameter to true for each Node VM. This setting ensures that the VMware vSphere Virtual Machine Disk (VMDK) always presents a consistent UUID to the VM, allowing the disk to be mounted properly. Every VM node that will participate in the cluster must have the disk.EnableUUID parameter set to true. To set this value, follow the steps for either the vSphere console or the govc CLI tool:
  - From the vSphere HTML Client, navigate to VM properties → VM Options → Advanced → Configuration Parameters and set disk.enableUUID=TRUE
  - Or, using the govc CLI, find the Node VM paths:

    $ govc ls /datacenter/vm/<vm-folder-name>

    Set disk.EnableUUID to true for all VMs:

    $ govc vm.change -e="disk.enableUUID=1" -vm='VM Path'

If OpenShift Container Platform node VMs are created from a virtual machine template, you can set disk.EnableUUID=1 on the template VM. VMs cloned from this template inherit this property.
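To confirm that the parameter took effect, you can dump a VM's advanced configuration with govc and filter for the key. This is a sketch that assumes govc is configured as shown earlier; the VM path is a placeholder:

$ govc vm.info -e 'VM Path' | grep -i disk.enableUUID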
25.1.1.2. Using OpenShift Container Platform with vMotion
OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.
If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.
Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores. It also does not support using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.
25.2. Configuring OpenShift Container Platform for vSphere
You can configure OpenShift Container Platform for vSphere in two ways:
- Using Ansible
- Manually
25.2.1. Option 1: Configuring OpenShift Container Platform for vSphere using Ansible
You can configure OpenShift Container Platform for the VMware vSphere Cloud Provider (VCP) by modifying the Ansible inventory file. These changes can be made before installation or applied to an existing cluster.
Procedure
Add the following to the Ansible inventory file:
[OSEv3:vars]
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username=administrator@vsphere.local 1
openshift_cloudprovider_vsphere_password=<password>
openshift_cloudprovider_vsphere_host=10.x.y.32 2
openshift_cloudprovider_vsphere_datacenter=<Datacenter> 3
openshift_cloudprovider_vsphere_datastore=<Datastore> 4
- 1
- A vCenter user with the required privileges.
- 2
- The vCenter server address.
- 3
- The vCenter datacenter name on which the Node VMs are deployed.
- 4
- The datastore used for creating VMDKs.
Run the deploy_cluster.yml playbook:

$ ansible-playbook -i <inventory_file> \
    playbooks/deploy_cluster.yml
Installing with Ansible also creates and configures the following files to fit your vSphere environment:
- /etc/origin/cloudprovider/vsphere.conf
- /etc/origin/master/master-config.yaml
- /etc/origin/node/node-config.yaml
As a reference, a full inventory is shown as follows:
The openshift_cloudprovider_vsphere_ values are required for OpenShift Container Platform to be able to create vSphere resources such as VMDKs on datastores for persistent volumes.
$ cat /etc/ansible/hosts

[OSEv3:children]
ansible
masters
infras
apps
etcd
nodes
lb

[OSEv3:vars]
become=yes
ansible_become=yes
ansible_user=root
oreg_auth_user=service_account 1
oreg_auth_password=service_account_token 2
openshift_deployment_type=openshift-enterprise

# Required per https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# vSphere Cloud provider
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username="administrator@vsphere.local"
openshift_cloudprovider_vsphere_password="password"
openshift_cloudprovider_vsphere_host="vcsa65-dc1.example.com"
openshift_cloudprovider_vsphere_datacenter=Datacenter
openshift_cloudprovider_vsphere_cluster=Cluster
openshift_cloudprovider_vsphere_resource_pool=ResourcePool
openshift_cloudprovider_vsphere_datastore="datastore"
openshift_cloudprovider_vsphere_folder="folder"

# Service catalog
openshift_hosted_etcd_storage_kind=dynamic
openshift_hosted_etcd_storage_volume_name=etcd-vol
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]

# Setup vsphere registry storage
openshift_hosted_registry_storage_kind=vsphere
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume']
openshift_hosted_registry_replicas=1

openshift_hosted_router_replicas=3
openshift_master_cluster_method=native
openshift_node_local_quota_per_fsgroup=512Mi
default_subdomain=example.com
openshift_master_cluster_hostname=openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com
openshift_master_default_subdomain=apps.example.com
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
osm_use_cockpit=true

# Red Hat subscription name and password
rhsub_user=username
rhsub_pass=password
rhsub_pool=8a85f9815e9b371b015e9b501d081d4b

# metrics
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_storage_volume_size=25Gi

# logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=30Gi
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_cluster_size=1
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_fluentd_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_storage_kind=dynamic

#registry
openshift_public_hostname=openshift.example.com

[ansible]
localhost

[masters]
master-0.example.com vm_name=master-0 ipv4addr=10.x.y.103
master-1.example.com vm_name=master-1 ipv4addr=10.x.y.104
master-2.example.com vm_name=master-2 ipv4addr=10.x.y.105

[infras]
infra-0.example.com vm_name=infra-0 ipv4addr=10.x.y.100
infra-1.example.com vm_name=infra-1 ipv4addr=10.x.y.101
infra-2.example.com vm_name=infra-2 ipv4addr=10.x.y.102

[apps]
app-0.example.com vm_name=app-0 ipv4addr=10.x.y.106
app-1.example.com vm_name=app-1 ipv4addr=10.x.y.107
app-2.example.com vm_name=app-2 ipv4addr=10.x.y.108

[etcd]
master-0.example.com
master-1.example.com
master-2.example.com

[lb]
haproxy-0.example.com vm_name=haproxy-0 ipv4addr=10.x.y.200

[nodes]
master-0.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
master-1.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
master-2.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
infra-0.example.com openshift_node_group_name="node-config-infra"
infra-1.example.com openshift_node_group_name="node-config-infra"
infra-2.example.com openshift_node_group_name="node-config-infra"
app-0.example.com openshift_node_group_name="node-config-compute"
app-1.example.com openshift_node_group_name="node-config-compute"
app-2.example.com openshift_node_group_name="node-config-compute"
- 1 2
- If you use a container registry that requires authentication, such as the default container image registry, specify the credentials for that account. See Accessing and Configuring the Red Hat Registry.
Deploying a vSphere VM environment is not officially supported by Red Hat, but it can be configured.
25.2.2. Option 2: Manually configuring OpenShift Container Platform for vSphere
25.2.2.1. Manually configuring master hosts for vSphere
Perform the following on all master hosts.
Procedure
Edit the master configuration file (/etc/origin/master/master-config.yaml by default) on all masters and update the contents of the apiServerArguments and controllerArguments sections:

kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
Important: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
When you configure OpenShift Container Platform for vSphere using Ansible, the /etc/origin/cloudprovider/vsphere.conf file is created automatically. Because you are manually configuring OpenShift Container Platform for vSphere, you must create the file yourself. Before you create the file, decide whether you want multiple vCenter zones.
The cluster installation process configures a single zone or a single vCenter by default. However, deploying OpenShift Container Platform in vSphere across multiple zones can help you avoid single points of failure, but it creates the need for shared storage across zones. For example, if an OpenShift Container Platform node host goes down in zone "A", the pods can be moved to zone "B". See Running in multiple zones in the Kubernetes documentation for more information.
To configure a single vCenter server, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:
[Global] 1
user = "myusername" 2
password = "mypassword" 3
port = "443" 4
insecure-flag = "1" 5
datacenters = "mydatacenter" 6

[VirtualCenter "10.10.0.2"] 7
user = "myvCenterusername"
password = "password"

[Workspace] 8
server = "10.10.0.2" 9
datacenter = "mydatacenter"
folder = "path/to/vms" 10
default-datastore = "shared-datastore" 11
resourcepool-path = "myresourcepoolpath" 12

[Disk]
scsicontrollertype = pvscsi 13

[Network]
public-network = "VM Network" 14
- 1
- Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
- 2
- vCenter username for the vSphere cloud provider.
- 3
- vCenter password for the specified user.
- 4
- Optional. Port number for the vCenter server. Defaults to port 443.
- 5
- Set to 1 if the vCenter uses a self-signed certificate.
- 6
- Name of the data center on which Node VMs are deployed.
- 7
- Override specific [Global] properties for this vCenter. Possible settings are port, user, insecure-flag, and datacenters. Any settings not specified are pulled from the [Global] section.
- 8
- Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others.
- 9
- IP Address or FQDN for the vCenter server.
- 10
- Path to the VM directory for node VMs.
- 11
- Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OpenShift Container Platform 3.9, if the datastore was located in a storage directory or was a member of a datastore cluster, the full path was required.
- 12
- Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
- 13
- The type of SCSI controller that the VMDK is attached to the VM as.
- 14
- Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes.
To configure multiple vCenter servers, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:
[Global] 1
user = "myusername" 2
password = "mypassword" 3
port = "443" 4
insecure-flag = "1" 5
datacenters = "us-east, us-west" 6

[VirtualCenter "10.10.0.2"] 7
user = "myvCenterusername"
password = "password"

[VirtualCenter "10.10.0.3"]
port = "448"
insecure-flag = "0"

[Workspace] 8
server = "10.10.0.2" 9
datacenter = "mydatacenter"
folder = "path/to/vms" 10
default-datastore = "shared-datastore" 11
resourcepool-path = "myresourcepoolpath" 12

[Disk]
scsicontrollertype = pvscsi 13

[Network]
public-network = "VM Network" 14
- 1
- Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
- 2
- vCenter username for the vSphere cloud provider.
- 3
- vCenter password for the specified user.
- 4
- Optional. Port number for the vCenter server. Defaults to port 443.
- 5
- Set to 1 if the vCenter uses a self-signed certificate.
- 6
- Name of the data centers on which Node VMs are deployed.
- 7
- Override specific [Global] properties for this vCenter. Possible settings are port, user, insecure-flag, and datacenters. Any settings not specified are pulled from the [Global] section.
- 8
- Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others.
- 9
- IP Address or FQDN of the vCenter server with which the Cloud Provider communicates.
- 10
- Path to the VM directory for node VMs.
- 11
- Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OpenShift Container Platform 3.9, if the datastore was located in a storage directory or was a member of a datastore cluster, the full path was required.
- 12
- Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
- 13
- The type of SCSI controller that the VMDK is attached to the VM as.
- 14
- Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes.
Restart the OpenShift Container Platform host services:
# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node
25.2.2.2. Manually configuring node hosts for vSphere
Perform the following on all node hosts.
Procedure
To configure the OpenShift Container Platform nodes for vSphere:
Edit the appropriate node configuration map and update the contents of the kubeletArguments section:

kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
Important: The nodeName must match the VM name in vSphere in order for the cloud provider integration to work properly. The name must also be RFC 1123 compliant.
Restart the OpenShift Container Platform services on all nodes:
# systemctl restart atomic-openshift-node
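To verify the nodeName requirement from the note above, you can compare the node names registered with OpenShift Container Platform against the vSphere VM inventory. This is a hedged verification sketch; the datacenter and folder path are placeholders:

$ oc get nodes
$ govc ls /<datacenter>/vm/<vm-folder-name>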
25.2.2.3. Applying Configuration Changes
Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Container Platform services:
# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node
Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster that uses the cloud provider integration. Install the cluster as if it is a bare metal environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. However, if that scenario is unavoidable, then complete the following process.
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:
- Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OpenShift Container Platform service.
# systemctl restart atomic-openshift-node
- Add back any labels on each node that you previously had.
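Labels from the backup taken earlier can be reapplied with oc label. The label key and value below are placeholders for whatever your backup contains:

$ oc label node <node_name> <label_key>=<label_value>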
25.3. Configuring OpenShift Container Platform to use vSphere storage
OpenShift Container Platform supports VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
OpenShift Container Platform creates the disk in vSphere and attaches the disk to the correct instance.
The OpenShift Container Platform persistent volume (PV) framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. vSphere VMDK volumes can be provisioned dynamically.
PVs are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. PV claims, however, are specific to a project or namespace and can be requested by users.
High availability of storage in the infrastructure is left to the underlying storage provider.
Prerequisites
Before creating PVs using vSphere, ensure your OpenShift Container Platform cluster meets the following requirements:
- OpenShift Container Platform must first be configured for vSphere.
- The host name of each node in the infrastructure must match its vSphere VM name.
- Each node host must be in the same resource group.
25.3.1. Dynamically Provisioning VMware vSphere volumes
Dynamically provisioning VMware vSphere volumes is the preferred provisioning method.
If you did not specify the openshift_cloudprovider_kind=vsphere and openshift_vsphere_* variables in the Ansible inventory file when you provisioned the cluster, you must manually create the following StorageClass to use the vsphere-volume provisioner:

$ oc get --export storageclass vsphere-standard -o yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "vsphere-standard" 1
provisioner: kubernetes.io/vsphere-volume 2
parameters:
  diskformat: thin 3
  datastore: "YourvSphereDatastoreName" 4
reclaimPolicy: Delete
- 1
- The name of the StorageClass.
- 2
- The provisioner for vSphere volumes.
- 3
- The disk format to use, such as thin, zeroedthick, or eagerzeroedthick.
- 4
- The name of the datastore where volumes are created.
After you request a PV using the StorageClass shown in the previous step, OpenShift Container Platform automatically creates VMDK disks in the vSphere infrastructure. To verify that the disks were created, use the Datastore browser in vSphere.
Note: vSphere-volume disks use the ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information.
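For illustration, a minimal PVC that triggers dynamic provisioning through the StorageClass above might look like the following sketch. The claim name and requested size are hypothetical:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-vsphere-claim          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce               # the only access mode vSphere volumes support
  storageClassName: vsphere-standard
  resources:
    requests:
      storage: 2Gi                # size of the VMDK to create

Creating this claim with oc create -f causes the provisioner to create a VMDK in the configured datastore and bind a PV to the claim.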
25.3.2. Statically Provisioning VMware vSphere volumes
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. After ensuring OpenShift Container Platform is configured for vSphere, all that is required for OpenShift Container Platform and vSphere is a VM folder path, file system type, and the PersistentVolume API.
25.3.2.1. Creating PersistentVolumes
Define a PV object definition, for example vsphere-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume: 3
    volumePath: "[datastore1] volumes/myDisk" 4
    fsType: ext4 5
- 1
- The name of the volume. This is how it is identified by PV claims and from pods.
- 2
- The amount of storage allocated to this volume.
- 3
- The volume type being used. This example uses vsphereVolume. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports both VMFS and VSAN datastores.
- The existing VMDK volume to use. You must enclose the datastore name in square brackets ([]) in the volume definition, as shown.
- 5
- The file system type to mount. For example, ext4, xfs, or other file systems.
Important: Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.
Create the PV:
$ oc create -f vsphere-pv.yaml
persistentvolume "pv0001" created
Verify that the PV was created:
$ oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
pv0001    <none>    2Gi        RWO           Available                       2s
Now you can request storage using PV claims, which can bind to your new PV.
PV claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a PV from a different namespace causes the pod to fail.
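A claim that binds to the statically provisioned PV above could look like this sketch. The claim name is hypothetical; setting storageClassName to an empty string prevents a default StorageClass from dynamically provisioning a new volume instead:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""           # bind to a pre-existing PV, do not provision
  resources:
    requests:
      storage: 2Gi               # matches the capacity of pv0001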
25.3.2.2. Formatting VMware vSphere volumes
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the given file system.
Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
25.4. Configuring the OpenShift Container Platform registry for vSphere
25.4.1. Configuring the OpenShift Container Platform registry for vSphere using Ansible
Procedure
To configure the Ansible inventory for the registry to use a vSphere volume:
[OSEv3:vars]
# vSphere Provider Configuration
openshift_hosted_registry_storage_kind=vsphere 1
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] 2
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] 3
openshift_hosted_registry_replicas=1 4
- 1
- The storage type to use for the registry.
- 2
- vSphere volumes only support ReadWriteOnce.
- 3
- The annotation that selects the vSphere volume provisioner.
- 4
- The number of registry replicas. Because the volume is ReadWriteOnce, only one replica can mount it.
The brackets in the configuration file above are required.
25.4.2. Dynamically provisioning storage for OpenShift Container Platform registry
To use vSphere volume storage, edit the registry’s configuration file and mount the volume to the registry pod.
Procedure
Create a new configuration file for the vSphere volume:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vsphere-registry-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: vsphere-standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
Create the file in OpenShift Container Platform:
$ oc create -f pvc-registry.yaml
Update the volume configuration to use the new PVC:
$ oc set volume dc docker-registry --add --name=registry-storage -t \
    pvc --claim-name=vsphere-registry-storage --overwrite
Redeploy the registry to read the updated configuration:
$ oc rollout latest docker-registry -n default
Verify the volume has been assigned:
$ oc set volume dc docker-registry -n default
25.4.3. Manually provisioning storage for OpenShift Container Platform registry
If a StorageClass is unavailable or is not used, you can run the following commands to manually create storage for the registry.
# VMFS
cd /vmfs/volumes/datastore1/
mkdir kubevols # Not needed but good hygiene

# VSAN
cd /vmfs/volumes/vsanDatastore/
/usr/lib/vmware/osfs/bin/osfs-mkdir kubevols # Needed

cd kubevols
vmkfstools -c 25G registry.vmdk
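To expose the manually created disk to the cluster, you can define a PV that points at the VMDK. This is a sketch under the assumption that the disk was created in the kubevols directory on datastore1 as shown above; the PV name is hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume          # hypothetical PV name
spec:
  capacity:
    storage: 25Gi                # matches the vmkfstools -c 25G size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] kubevols/registry.vmdk"
    fsType: ext4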
25.4.4. About Red Hat OpenShift Container Storage
Red Hat OpenShift Container Storage (RHOCS) is a provider of agnostic persistent storage for OpenShift Container Platform, either in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OpenShift Container Platform for deployment, management, and monitoring, regardless of whether it is installed on OpenShift Container Platform (converged) or with OpenShift Container Platform (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.11 Deployment Guide.
25.5. Backup of persistent volumes
OpenShift Container Platform provisions new volumes as independent persistent disks so that the volume can be freely attached and detached on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots.
To create a backup of PVs:
- Stop the application using the PV.
- Clone the persistent disk (see the example after this list).
- Restart the application.
- Create a backup of the cloned disk.
- Delete the cloned disk.
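As an illustration, the clone step can be performed on an ESXi host with vmkfstools, which copies a VMDK. This is a hedged sketch; the datastore path and disk names are hypothetical:

# cd /vmfs/volumes/datastore1/kubevols
# vmkfstools -i pv0001.vmdk pv0001-clone.vmdk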