Chapter 21. Configuring for VMware vSphere
21.1. Overview
You can configure OpenShift Container Platform to access VMware vSphere VMDK Volumes. This includes using VMware vSphere VMDK Volumes as persistent storage for application data.
The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:
- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes
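For example, once the cloud provider is enabled and configured (Sections 21.2 through 21.6), dynamic provisioning can be exercised through a StorageClass. The following is a minimal sketch; the class name, disk format, and datastore value are illustrative, and the datastore parameter can be omitted when a default is set in the [Workspace] section of vsphere.conf:

$ oc create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-standard          # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                # thin, zeroedthick, or eagerzeroedthick
  datastore: mydatastore          # optional; illustrative datastore name
EOF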
21.2. Enabling VMware vSphere Cloud Provider
Enabling VMware vSphere requires installing the VMware Tools on each Node VM. See Installing VMware tools for more information.
To enable VMware vSphere cloud provider for OpenShift Container Platform:
- Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
- Verify that the Node VM names comply with the following regex (a quick shell check is shown after this list):
  [a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*
  Important: VM names cannot:
  - begin with numbers.
  - have any capital letters.
  - have any special characters except -.
  - be shorter than three characters or longer than 63 characters.
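As a quick sanity check, a VM name can be tested against this regex with grep; a minimal sketch, where the example name is illustrative (the three-to-63-character length limits must be checked separately):

$ echo "app-node-1.example.com" | grep -Eq '^[a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*$' && echo valid || echo invalid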
- Set the disk.EnableUUID parameter to TRUE for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node that will participate in the cluster, follow the steps below using the GOVC tool:
  Set up the GOVC environment:
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
Find the Node VM paths:
govc ls /datacenter/vm/<vm-folder-name>
Set disk.EnableUUID to true for all VMs:
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
Note: If OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
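To apply the setting to every VM in the folder in a single pass, the two govc commands above can be combined in a loop; a minimal sketch, assuming the same <vm-folder-name> placeholder and GOVC environment as the previous steps:

for vm in $(govc ls /datacenter/vm/<vm-folder-name>); do
  govc vm.change -e="disk.enableUUID=1" -vm="$vm"
done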
Create and assign roles to the vSphere Cloud Provider user and vSphere entities. The vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role, user, and role assignment (a govc sketch follows the table).
| Roles | Privileges | Entities | Propagate to Children |
| --- | --- | --- | --- |
| manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes |
| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No |
| k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No |
| ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
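Role creation and assignment can also be scripted with govc rather than the vSphere Web Client; a minimal sketch for the manage-k8s-volumes role, where the principal and datastore path are illustrative:

# Create the role with the privileges listed in the table above:
govc role.create manage-k8s-volumes \
  Datastore.AllocateSpace Datastore.FileManagement \
  System.Anonymous System.Read System.View
# Assign it to the cloud provider user on the datastore, without propagation:
govc permissions.set -principal openshift@vsphere.local \
  -role manage-k8s-volumes -propagate=false \
  /datacenter/datastore/mydatastore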
After enabling the vSphere Cloud Provider, node names are set to the VM names from the vCenter Inventory.
The openshift_hostname variable must match the virtual machine name and its host name. The openshift_hostname variable defines the nodeName value in the node-config.yaml file. This value is compared to the nodeName value determined by using the uname -n command. In case of a mismatch, the native cloud integration for those providers will not work.
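A mismatch can be spotted ahead of time by comparing the two names directly; a quick sketch, reusing the illustrative folder placeholder from Section 21.2:

# On the node host: the name the kubelet will report
$ uname -n
# Against vCenter: the VM names in the node folder
$ govc ls /datacenter/vm/<vm-folder-name> | xargs -n1 basename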
21.3. Configuring OpenShift Container Platform for vSphere using Ansible
You can configure OpenShift Container Platform for VMware vSphere (VCP) by modifying the Ansible inventory file during installation or after installation.
Procedure
Add the following section to the Ansible inventory file:
[OSEv3:vars]
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username=administrator@vsphere.local
openshift_cloudprovider_vsphere_password=<password>
openshift_cloudprovider_vsphere_host=10.x.y.32
openshift_cloudprovider_vsphere_datacenter=<Datacenter>
openshift_cloudprovider_vsphere_datastore=<Datastore>
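If these variables are added after the initial installation, the change must still be rolled out to the masters and nodes. One way is to re-run the relevant openshift-ansible playbooks; the paths below are typical for OpenShift Container Platform 3.x installations but vary by release, so treat them as an assumption:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-master/config.yml
$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-node/config.yml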
21.4. The VMware vSphere Configuration File
Configuring OpenShift Container Platform for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node host.
If the file does not exist, create it, and add the following:
[Global] 1
user = "myusername" 2
password = "mypassword" 3
port = "443" 4
insecure-flag = "1" 5
datacenter = "mydatacenter" 6

[VirtualCenter "1.2.3.4"] 7
user = "myvCenterusername"
password = "password"

[VirtualCenter "1.2.3.5"]
port = "448"
insecure-flag = "0"

[Workspace] 8
server = "10.10.0.2" 9
datacenter = "mydatacenter"
folder = "path/to/file" 10
datastore = "mydatastore" 11
resourcepool-path = "myresourcepoolpath" 12

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VM Network" 13
1. Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
2. vCenter username for the vSphere cloud provider.
3. vCenter password for the specified user.
4. Optional. Port number for the vCenter server. Defaults to port 443.
5. Set to 1 if the vCenter uses a self-signed certificate.
6. Name of the data center on which Node VMs are deployed.
7. Overrides specific [Global] properties for this Virtual Center. Possible settings can be port, user, insecure-flag, and datacenters. Any settings not specified are pulled from the [Global] section.
8. Set any properties used for various vSphere Cloud Provider functionality, for example dynamic provisioning, Storage Profile Based Volume provisioning, and others.
9. IP address or FQDN for the vCenter server.
10. Path to the VM directory for node VMs.
11. Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage directory or is a member of a datastore cluster, you must specify the full path.
12. Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning should be created.
13. Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host's ExternalIP that is registered with Kubernetes.
21.5. Configuring Masters
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections with the following:
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig: {}
  apiServerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
Important: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
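After the master services are restarted (see Section 21.7), a generic process check can confirm that the flags were picked up; this is a convenience sketch, not a documented verification step:

# Both master processes should show the vSphere flags on their command line:
$ ps aux | grep 'cloud-provider=vsphere' | grep -v grep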
21.6. Configuring Nodes
Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:

kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
Important: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.
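Once the node service is restarted, the kubelet re-registers the node using the cloud provider's identity; a quick check, where the jsonpath expression assumes the OpenShift 3.x node API that still exposes spec.externalID:

$ oc get nodes -o wide
$ oc get node <node_name> -o jsonpath='{.spec.externalID}'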
21.7. Applying Configuration Changes
Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Container Platform services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# systemctl restart atomic-openshift-node
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:
- Log in to the CLI as a cluster administrator.
- Check and back up existing node labels:
  $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
- Delete the nodes:
  $ oc delete node <node_name>
- On each node host, restart the OpenShift Container Platform service:
  # systemctl restart atomic-openshift-node
- Add back any labels on each node that you previously had (a scripted sketch follows this list).
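The label backup and restore steps can be scripted with standard oc commands; a minimal sketch, where the backup file name and the example label are illustrative:

# Before deleting the node, save its labels (illustrative file name):
$ oc get node <node_name> --show-labels | tee node_labels_backup.txt
# After the node re-registers, reapply each saved label, for example:
$ oc label node <node_name> region=infra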
21.8. Backup of Persistent Volumes
OpenShift Container Platform provisions new volumes as independent persistent disks so that the volume can be freely attached and detached on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots.
To create a backup of PVs:
- Stop the application using the PV.
- Clone the persistent disk.
- Restart the application.
- Create a backup of the cloned disk.
- Delete the cloned disk.
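As an illustration of steps 1 through 3 for an application managed by a deployment configuration, the disk can be cloned with vmkfstools on an ESXi host; all names and datastore paths below are illustrative, and kubevols is the cloud provider's default volume directory:

# Stop the application using the PV (illustrative dc name):
$ oc scale dc/<app_name> --replicas=0
# On the ESXi host: clone the persistent disk (illustrative paths):
# vmkfstools -i /vmfs/volumes/mydatastore/kubevols/app-disk.vmdk \
#            /vmfs/volumes/mydatastore/backups/app-disk-clone.vmdk
# Restart the application:
$ oc scale dc/<app_name> --replicas=1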