Chapter 20. Configuring for VMware vSphere


20.1. Overview

OpenShift Container Platform can be configured to access VMware vSphere VMDK Volumes, including using them as persistent storage for application data.

The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:

  • Volumes
  • Persistent Volumes
  • Storage Classes and provisioning of volumes (see the example below)
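
For example, after the cloud provider is enabled and configured as described in the following sections, dynamic provisioning can be requested through a StorageClass that uses the kubernetes.io/vsphere-volume provisioner. This is a minimal sketch; the class name and the thin disk format are illustrative choices:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-standard
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin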

20.2. Enabling VMware vSphere Cloud Provider

Important

Enabling VMware vSphere requires installing the VMware Tools on each Node VM. See Installing VMware tools for more information.

To enable the VMware vSphere cloud provider for OpenShift Container Platform:

  1. Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
  2. Verify that the Node VM names comply with the regex [a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*; a quick way to test a name is shown after the list of constraints below.

    Important

    VM names must not:

    • begin with numbers.
    • contain any capital letters.
    • contain any special characters except -.
    • be shorter than three characters or longer than 63 characters.
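
    A candidate name can be checked against this regex before the VM is created, for example with GNU grep in PCRE mode (a sketch; the name shown is hypothetical):

      $ echo "node1.example.com" | grep -P '^[a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*$' || echo "invalid VM name"
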
  3. Set the disk.EnableUUID parameter to TRUE for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node that will participate in the cluster, follow the steps below using the GOVC tool:

    1. Set up the GOVC environment:

      export GOVC_URL='vCenter IP OR FQDN'
      export GOVC_USERNAME='vCenter User'
      export GOVC_PASSWORD='vCenter Password'
      export GOVC_INSECURE=1
    2. Find the Node VM paths:

      govc ls /datacenter/vm/<vm-folder-name>
    3. Set disk.EnableUUID to true for all VMs:

      govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
      Note

      If OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
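
      To confirm that the parameter took effect, you can inspect the VM's ExtraConfig with the GOVC tool. This is a sketch that assumes the GOVC environment configured in the first step:

      govc vm.info -e 'VM Path' | grep -i disk.enableUUID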

  4. Create roles for the vSphere Cloud Provider user and assign them to the vSphere entities. The vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.

    Role: manage-k8s-node-vms
    Privileges: Resource.AssignVMToPool, System.Anonymous, System.Read,
      System.View, VirtualMachine.Config.AddExistingDisk,
      VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice,
      VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create,
      VirtualMachine.Inventory.Delete
    Entities: Cluster, Hosts, VM Folder
    Propagate to Children: Yes

    Role: manage-k8s-volumes
    Privileges: Datastore.AllocateSpace, Datastore.FileManagement,
      System.Anonymous, System.Read, System.View
    Entities: Datastore
    Propagate to Children: No

    Role: k8s-system-read-and-spbm-profile-view
    Privileges: StorageProfile.View, System.Anonymous, System.Read, System.View
    Entities: vCenter
    Propagate to Children: No

    Role: ReadOnly
    Privileges: System.Anonymous, System.Read, System.View
    Entities: Datacenter, Datastore Cluster, Datastore Storage Folder
    Propagate to Children: No
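
    These roles and assignments can also be created with the GOVC tool instead of the vCenter UI. The following sketch covers one of the roles and assumes a cloud provider user named openshift@vsphere.local, which is hypothetical:

      # Create the role with the privileges listed in the table above:
      govc role.create manage-k8s-volumes \
        Datastore.AllocateSpace Datastore.FileManagement \
        System.Anonymous System.Read System.View

      # Assign the role to the cloud provider user on the datastore,
      # without propagating to children:
      govc permissions.set -principal 'openshift@vsphere.local' \
        -role manage-k8s-volumes -propagate=false /datacenter/datastore/datastore-name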

Note

After enabling the vSphere Cloud Provider, Node names are set to the VM names from the vCenter Inventory.

Warning

The openshift_hostname variable must match the virtual machine name and its host name. The openshift_hostname variable defines the nodeName value in the node-config.yaml file. This value is compared to the nodeName value determined by running uname -n. If the values do not match, the native cloud integration does not work.
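
A sketch of an Ansible inventory entry that keeps these values aligned (the host name is hypothetical):

[nodes]
node1.example.com openshift_hostname=node1.example.com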

20.3. The VMware vSphere Configuration File

Configuring OpenShift Container Platform for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node host.

Important

If you are upgrading from OpenShift Container Platform version 3.6 to a newer version, place the vSphere configuration file (vsphere.conf) in both the /etc/vsphere/ and /etc/origin/cloudprovider/ directories.

If the file does not exist, create it, and add the following:

[Global]
        user = "username" 1
        password = "password" 2
        server = "10.10.0.2" 3
        port = "443" 4
        insecure-flag = "1" 5
        datacenter = "datacenter-name" 6
        datastore = "datastore-name" 7
        working-dir = "vm-folder-path" 8
        vm-uuid = "vm-uuid" 9
[Disk]
        scsicontrollertype = pvscsi
1
vCenter username for the vSphere cloud provider.
2
vCenter password for the specified user.
3
IP Address or FQDN for the vCenter server.
4
(Optional) Port number for the vCenter server. Defaults to port 443.
5
Set to 1 if the vCenter uses a self-signed cert.
6
Name of the data center on which Node VMs are deployed.
7
Name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full datastore path. Verify that the vSphere Cloud Provider user has the read privilege set on the datastore cluster or storage folder so that it can find the datastore.
8
(Optional) The vCenter VM folder path in which the node VMs are located. It can be set to an empty path (working-dir = "") if the Node VMs are located in the root VM folder.
9
(Optional) The VM instance UUID of the Node VM. It can be set to empty (vm-uuid = ""). If left empty, the UUID is retrieved from the /sys/class/dmi/id/product_serial file on the virtual machine (requires root access).
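
If vm-uuid is left empty, you can verify the value the node will read by inspecting the file directly on the VM (requires root access, as noted above):

# cat /sys/class/dmi/id/product_serial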

20.4. Configuring Masters

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections with the following:

kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      {}
  apiServerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
Important

When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
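
To confirm that both sections reference the cloud provider settings after editing, a quick check such as the following can be used (a sketch):

# grep -B 1 -A 1 'vsphere' /etc/origin/master/master-config.yaml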

20.5. Configuring Nodes

  1. Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:

    kubeletArguments:
      cloud-provider:
        - "vsphere"
      cloud-config:
        - "/etc/origin/cloudprovider/vsphere.conf"
    Important

    When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.

20.6. Applying Configuration Changes

Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes; see Restarting OpenShift Container Platform services:

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# systemctl restart atomic-openshift-node

Switching from not using a cloud provider to using one produces an error message. Adding the cloud provider causes the node to be deleted, because the node switches from using the host name as the externalID (the behavior when no cloud provider is used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.
  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OpenShift Container Platform service.

    # systemctl restart atomic-openshift-node
  5. Add back any labels on each node that you recorded in step 2, as in the example below.
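
    For example, if the backup from step 2 showed a region label, it can be reapplied as follows (the key and value are hypothetical):

    $ oc label node <node_name> region=primary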

20.7. Backup of Persistent Volumes

OpenShift Container Platform provisions new volumes as independent persistent disks so that they can be freely attached and detached on any node in the cluster. As a consequence, you cannot back up volumes by using snapshots.

To create a backup of PVs:

  1. Stop the application using the PV.
  2. Clone the persistent disk (see the example after this list).
  3. Restart the application.
  4. Create a backup of the cloned disk.
  5. Delete the cloned disk.
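
The clone in step 2 can be performed with the GOVC tool. This is a sketch: the kubevols folder is the default location for dynamically provisioned vSphere volumes, but the datastore and file names are hypothetical, and the behavior of govc datastore.cp for virtual disks should be verified in your govc version:

$ govc datastore.cp -ds datastore-name kubevols/<pv-name>.vmdk kubevols-backup/<pv-name>.vmdk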