
Chapter 14. Installing on vSphere


The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling.

14.1. Adding hosts on vSphere

You can add hosts to the Assisted Installer cluster using the online vSphere Client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the vSphere documentation.

To add hosts on vSphere with the govc vSphere CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting; it includes only what is required to boot a host with networking, and the majority of the content is downloaded upon boot. The ISO image is approximately 100 MB in size.

After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines.

Prerequisites

  • You are using vSphere 7.0.2 or higher.
  • You have the vSphere govc CLI tool installed and configured.
  • You have set the disk.enableUUID parameter to true in vSphere.
  • You have created a cluster in the Assisted Installer web console, or you have created an Assisted Installer cluster profile and infrastructure environment with the API.
  • You have exported your infrastructure environment ID in your shell as $INFRA_ENV_ID.
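
  For example, a minimal sketch; the ID value is a placeholder for your own infrastructure environment ID:

    $ export INFRA_ENV_ID=<infra_env_id>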

Procedure

  1. Configure the discovery image if you want it to boot with an Ignition file.
  2. In Cluster details, select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  3. In Host discovery, click the Add hosts button and select the provisioning type.
  4. Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

    1. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
    2. In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
  5. Select the required discovery image ISO.

    Note

    Minimal image file: Provision with virtual media downloads a smaller image that fetches the data needed to boot.

  6. In Networking, select Cluster-managed networking or User-managed networking:

    1. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server.

      Note

      The proxy username and password must be URL-encoded.

    2. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates.
    3. Optional: Configure the discovery image if you want to boot it with an Ignition file. For more information, see Additional Resources.
  7. Click Generate Discovery ISO.
  8. Copy the Discovery ISO URL.
  9. Download the discovery ISO:

    $ wget -O vsphere-discovery-image.iso <discovery_url>

    Replace <discovery_url> with the Discovery ISO URL from the preceding step.

  10. On the command line, power off and delete any preexisting virtual machines:

    $ for VM in $(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>)
    do
         /usr/local/bin/govc vm.power -off $VM
         /usr/local/bin/govc vm.destroy $VM
    done

    Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.

  11. Remove preexisting ISO images from the data store, if there are any:

    $ govc datastore.rm -ds <iso_datastore> <image>

    Replace <iso_datastore> with the name of the data store. Replace <image> with the name of the ISO image.

  12. Upload the Assisted Installer discovery ISO:

    $ govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso

    Replace <iso_datastore> with the name of the data store.

    Note

    All nodes in the cluster must boot from the discovery image.
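
    To verify the upload, you can list the data store contents; a quick check sketch:

    $ govc datastore.ls -ds <iso_datastore>

    The discovery ISO, vsphere-discovery-image.iso, must appear in the output.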

  13. Boot three control plane nodes:

    $ govc vm.create -net.adapter <network_adapter_type> \
                     -disk.controller <disk_controller_type> \
                     -pool=<resource_pool> \
                     -c=16 \
                     -m=32768 \
                     -disk=120GB \
                     -disk-datastore=<datastore_file> \
                     -net.address="<nic_mac_address>" \
                     -iso-datastore=<iso_datastore> \
                     -iso="vsphere-discovery-image.iso" \
                     -folder="<inventory_folder>" \
                     <hostname>.<cluster_name>.example.com

    See vm.create for details.

    Note

    The preceding example illustrates the minimum required resources for control plane nodes.
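
    For example, a filled-in invocation; every value here (adapter type, controller type, resource pool, data store, MAC address, folder, and hostname) is an illustrative placeholder:

    $ govc vm.create -net.adapter vmxnet3 \
                     -disk.controller pvscsi \
                     -pool=/dc1/host/ocp/Resources \
                     -c=16 \
                     -m=32768 \
                     -disk=120GB \
                     -disk-datastore=datastore1 \
                     -net.address="00:50:56:11:22:33" \
                     -iso-datastore=datastore1 \
                     -iso="vsphere-discovery-image.iso" \
                     -folder="ocp-vms" \
                     master-0.ocp.example.com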

  14. Boot at least two worker nodes:

    $ govc vm.create -net.adapter <network_adapter_type> \
                     -disk.controller <disk_controller_type> \
                     -pool=<resource_pool> \
                     -c=4 \
                     -m=8192 \
                     -disk=120GB \
                     -disk-datastore=<datastore_file> \
                     -net.address="<nic_mac_address>" \
                     -iso-datastore=<iso_datastore> \
                     -iso="vsphere-discovery-image.iso" \
                     -folder="<inventory_folder>" \
                     <hostname>.<cluster_name>.example.com

    See vm.create for details.

    Note

    The preceding example illustrates the minimum required resources for worker nodes.

  15. Ensure the VMs are running:

    $ govc ls /<datacenter>/vm/<folder_name>

    Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.

  16. After 2 minutes, shut down the VMs:

    $ for VM in $(govc ls /<datacenter>/vm/<folder_name>)
    do
         govc vm.power -s=true $VM
    done

    Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.

  17. Set the disk.enableUUID parameter to TRUE:

    $ for VM in $(govc ls /<datacenter>/vm/<folder_name>)
    do
         govc vm.change -vm $VM -e disk.enableUUID=TRUE
    done

    Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.

    Note

    You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere.
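
    To verify the setting, you can inspect the extra configuration of each VM; a verification sketch:

    $ for VM in $(govc ls /<datacenter>/vm/<folder_name>)
    do
         govc vm.info -e $VM | grep disk.enableUUID
    done

    Each VM must report disk.enableUUID: TRUE.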

  18. Restart the VMs:

    $ for VM in $(govc ls /<datacenter>/vm/<folder_name>)
    do
         govc vm.power -on=true $VM
    done

    Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.

  19. Return to the Assisted Installer web console and wait until the Assisted Installer discovers the hosts and each of them has a Ready status.
  20. Select roles if needed.
  21. In Networking, clear the Allocate IPs via DHCP server checkbox.
  22. Set the API VIP address.
  23. Set the Ingress VIP address.
  24. Continue with the installation procedure.

Additional resources

14.2. vSphere postinstallation configuration using the CLI

After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:

  • vCenter username
  • vCenter password
  • vCenter address
  • vCenter cluster
  • Data center
  • Data store
  • Folder

Prerequisites

  • The Assisted Installer has finished installing the cluster successfully.
  • The cluster is connected to console.redhat.com.

Procedure

  1. Generate a base64-encoded username and password for vCenter:

    $ echo -n "<vcenter_username>" | base64 -w0

    Replace <vcenter_username> with your vCenter username.

    $ echo -n "<vcenter_password>" | base64 -w0

    Replace <vcenter_password> with your vCenter password.
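
    To confirm that a value encodes cleanly, you can round-trip it; a quick check:

    $ echo -n "<vcenter_username>" | base64 -w0 | base64 -d

    The output must match the original value exactly.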

  2. Back up the vSphere credentials:

    $ oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml
  3. Edit the vSphere credentials:

    $ cp creds_backup.yaml vsphere-creds.yaml
    $ vi vsphere-creds.yaml
    apiVersion: v1
    data:
      <vcenter_address>.username: <vcenter_username_encoded>
      <vcenter_address>.password: <vcenter_password_encoded>
    kind: Secret
    metadata:
      annotations:
        cloudcredential.openshift.io/mode: passthrough
      creationTimestamp: "2022-01-25T17:39:50Z"
      name: vsphere-creds
      namespace: kube-system
      resourceVersion: "2437"
      uid: 06971978-e3a5-4741-87f9-2ca3602f2658
    type: Opaque

    Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password.

  4. Replace the vSphere credentials:

    $ oc replace -f vsphere-creds.yaml
  5. Redeploy the kube-controller-manager pods:

    $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
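
    You can watch the redeployment progress; a monitoring sketch:

    $ oc get clusteroperator kube-controller-manager -w

    Wait until the operator reports Available=True and Progressing=False before continuing.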
  6. Back up the vSphere cloud provider configuration:

    $ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml
  7. Edit the cloud provider configuration:

    $ cp cloud-provider-config_backup.yaml cloud-provider-config.yaml
    $ vi cloud-provider-config.yaml
    apiVersion: v1
    data:
      config: |
        [Global]
        secret-name = "vsphere-creds"
        secret-namespace = "kube-system"
        insecure-flag = "1"
    
        [Workspace]
        server = "<vcenter_address>"
        datacenter = "<datacenter>"
        default-datastore = "<datastore>"
        folder = "/<datacenter>/vm/<folder>"
    
        [VirtualCenter "<vcenter_address>"]
        datacenters = "<datacenter>"
    kind: ConfigMap
    metadata:
      creationTimestamp: "2022-01-25T17:40:49Z"
      name: cloud-provider-config
      namespace: openshift-config
      resourceVersion: "2070"
      uid: 80bb8618-bf25-442b-b023-b31311918507

    Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs.

  8. Apply the cloud provider configuration:

    $ oc apply -f cloud-provider-config.yaml
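
    Optionally, confirm the change; a verification sketch:

    $ oc get cm cloud-provider-config -n openshift-config -o yaml

    The output must show the [Workspace] and [VirtualCenter] values that you entered.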
  9. Taint the nodes with the uninitialized taint:

    Important

    Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later.

    1. Identify the nodes to taint:

      $ oc get nodes
    2. Run the following command for each node:

      $ oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

      Replace <node_name> with the name of the node.

    Example

    $ oc get nodes
    NAME       STATUS   ROLES                  AGE   VERSION
    master-0   Ready    control-plane,master   45h   v1.26.3+379cd9f
    master-1   Ready    control-plane,master   45h   v1.26.3+379cd9f
    worker-0   Ready    worker                 45h   v1.26.3+379cd9f
    worker-1   Ready    worker                 45h   v1.26.3+379cd9f
    master-2   Ready    control-plane,master   45h   v1.26.3+379cd9f
    
    $ oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    $ oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    $ oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    $ oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    $ oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
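
    Instead of tainting each node individually, you can loop over all nodes; an equivalent sketch:

    $ for NODE in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
    do
         oc adm taint node $NODE node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    done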

  10. Back up the infrastructures configuration:

    $ oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup
  11. Edit the infrastructures configuration:

    $ cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml
    $ vi infrastructures.config.openshift.io.yaml
    apiVersion: v1
    items:
    - apiVersion: config.openshift.io/v1
      kind: Infrastructure
      metadata:
        creationTimestamp: "2022-05-07T10:19:55Z"
        generation: 1
        name: cluster
        resourceVersion: "536"
        uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d
      spec:
        cloudConfig:
          key: config
          name: cloud-provider-config
        platformSpec:
          type: VSphere
          vsphere:
            failureDomains:
            - name: assisted-generated-failure-domain
              region: assisted-generated-region
              server: <vcenter_address>
              topology:
                computeCluster: /<data_center>/host/<vcenter_cluster>
                datacenter: <data_center>
                datastore: /<data_center>/datastore/<datastore>
                folder: "/<data_center>/path/to/folder"
                networks:
                - "VM Network"
                resourcePool: /<data_center>/host/<vcenter_cluster>/Resources
              zone: assisted-generated-zone
            nodeNetworking:
              external: {}
              internal: {}
            vcenters:
            - datacenters:
              - <data_center>
              server: <vcenter_address>
    
    kind: List
    metadata:
      resourceVersion: ""

    Replace <vcenter_address> with your vCenter address. Replace <data_center> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace /<data_center>/path/to/folder with the path to the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed.

  12. Apply the infrastructures configuration:

    $ oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true
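
    Optionally, confirm the result; a verification sketch:

    $ oc get infrastructures.config.openshift.io cluster -o yaml

    The output must show the failure domain, vCenter, and data center values under spec.platformSpec.vsphere.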

14.3. vSphere postinstallation configuration using the web console

After installing an OpenShift Container Platform cluster by using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:

  • vCenter address
  • vCenter cluster
  • vCenter username
  • vCenter password
  • Data center
  • Default data store
  • Virtual machine folder

Prerequisites

  • The Assisted Installer has finished installing the cluster successfully.
  • The cluster is connected to console.redhat.com.

Procedure

  1. In the Administrator perspective, navigate to Home > Overview.
  2. Under Status, click vSphere connection to open the vSphere connection configuration wizard.
  3. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example, https://[your_vCenter_address]/ui.
  4. In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.

    Important

    This step is mandatory if you installed OpenShift Container Platform 4.13 or later.

  5. In the Username field, enter your vSphere vCenter username.
  6. In the Password field, enter your vSphere vCenter password.

    Warning

    The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable.

  7. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter.
  8. In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename.

    Warning

    Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes.

  9. In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr. For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder.
  10. Click Save Configuration. This updates the cloud-provider-config ConfigMap in the openshift-config namespace, and starts the configuration process.
  11. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy.

Verification

The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected.

Follow the steps below to monitor the configuration process.

  1. Check that the configuration process completed successfully:

    1. In the Administrator perspective, navigate to Home > Overview.
    2. Under Status, click Operators. Wait for all operator statuses to change from Progressing to All succeeded. A Failed status indicates that the configuration failed.
    3. Under Status, click Control Plane. Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed.

    A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again.

  2. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps:

    1. Create a StorageClass object using the following YAML:

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: vsphere-sc
      provisioner: kubernetes.io/vsphere-volume
      parameters:
        datastore: YOURVCENTERDATASTORE
        diskformat: thin
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
    2. Create a PersistentVolumeClaims object using the following YAML:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: test-pvc
        namespace: openshift-config
        annotations:
          volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
        finalizers:
          - kubernetes.io/pvc-protection
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: vsphere-sc
        volumeMode: Filesystem

    For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. To troubleshoot a PersistentVolumeClaims object, navigate to Storage > PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.
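
    A minimal sketch for applying both objects and confirming that the claim binds; the file names are assumptions:

    $ oc apply -f vsphere-sc.yaml
    $ oc apply -f test-pvc.yaml
    $ oc get pvc test-pvc -n openshift-config

    The PersistentVolumeClaim must reach the Bound status within a few minutes.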
