Chapter 14. Installing on vSphere
The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, exposing the Machine API to vSphere and enabling autoscaling.
14.1. Adding hosts on vSphere
You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere.
To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is approximately 100 MB in size.
After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines.
Prerequisites
- You are using vSphere 7.0.2 or higher.
- You have the govc vSphere CLI tool installed and configured (see the example configuration after this list).
- You have set disk.enableUUID to true in vSphere.
- You have created a cluster in the Assisted Installer web console, or you have created an Assisted Installer cluster profile and infrastructure environment with the API.
- You have exported your infrastructure environment ID in your shell as $INFRA_ENV_ID.
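For reference, a minimal govc configuration sketch follows. The environment variable values are placeholders for your own vCenter address and credentials, and GOVC_INSECURE=true is appropriate only for environments with self-signed certificates:
$ export GOVC_URL="https://<vcenter_address>"
$ export GOVC_USERNAME="<vcenter_username>"
$ export GOVC_PASSWORD="<vcenter_password>"
$ export GOVC_INSECURE=true   # skip TLS verification; only for self-signed certificates
$ govc about                  # verifies connectivity by printing vCenter version information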
Procedure
- In Cluster details, select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
- In Host discovery, click the Add hosts button and select the provisioning type.
- Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.
- If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
- In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
Select the required discovery image ISO.
Note: Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot.
In Networking, select Cluster-managed networking or User-managed networking.
- Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server.
Note: The proxy username and password must be URL-encoded.
- Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates.
- Optional: Configure the discovery image if you want to boot it with an ignition file. For more information, see Additional Resources.
- Click Generate Discovery ISO.
- Copy the Discovery ISO URL.
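Alternatively, if you created the infrastructure environment with the API, you can retrieve the discovery ISO URL from the Assisted Installer REST API. This sketch assumes you have exported a valid bearer token as $API_TOKEN:
$ curl -s -H "Authorization: Bearer $API_TOKEN" \
  "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/image-url"
The response contains the pre-signed URL to use as <discovery_url> in the next step.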
Download the discovery ISO:
$ wget -O vsphere-discovery-image.iso <discovery_url>
Replace <discovery_url> with the Discovery ISO URL from the preceding step.
On the command line, power off and delete any preexisting virtual machines:
$ for VM in $(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>); do
    /usr/local/bin/govc vm.power -off $VM
    /usr/local/bin/govc vm.destroy $VM
  done
Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.
Remove preexisting ISO images from the data store, if there are any:
$ govc datastore.rm -ds <iso_datastore> <image>
Replace <iso_datastore> with the name of the data store. Replace <image> with the name of the ISO image.
Upload the Assisted Installer discovery ISO:
$ govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso
Replace <iso_datastore> with the name of the data store.
Note: All nodes in the cluster must boot from the discovery image.
Boot three control plane nodes:
$ govc vm.create -net.adapter <network_adapter_type> \
  -disk.controller <disk_controller_type> \
  -pool=<resource_pool> \
  -c=16 \
  -m=32768 \
  -disk=120GB \
  -disk-datastore=<datastore_file> \
  -net.address="<nic_mac_address>" \
  -iso-datastore=<iso_datastore> \
  -iso="vsphere-discovery-image.iso" \
  -folder="<inventory_folder>" \
  <hostname>.<cluster_name>.example.com
See vm.create for details.
Note: The preceding example illustrates the minimum required resources for control plane nodes.
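For example, a loop such as the following creates the three control plane VMs in one pass. The master-0 through master-2 hostnames are assumptions for illustration, and -net.address is omitted so that vSphere assigns each VM a unique MAC address automatically:
$ for HOST in master-0 master-1 master-2; do
    govc vm.create -net.adapter <network_adapter_type> \
      -disk.controller <disk_controller_type> \
      -pool=<resource_pool> \
      -c=16 \
      -m=32768 \
      -disk=120GB \
      -disk-datastore=<datastore_file> \
      -iso-datastore=<iso_datastore> \
      -iso="vsphere-discovery-image.iso" \
      -folder="<inventory_folder>" \
      $HOST.<cluster_name>.example.com
  done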
Boot at least two worker nodes:
$ govc vm.create -net.adapter <network_adapter_type> \
  -disk.controller <disk_controller_type> \
  -pool=<resource_pool> \
  -c=4 \
  -m=8192 \
  -disk=120GB \
  -disk-datastore=<datastore_file> \
  -net.address="<nic_mac_address>" \
  -iso-datastore=<iso_datastore> \
  -iso="vsphere-discovery-image.iso" \
  -folder="<inventory_folder>" \
  <hostname>.<cluster_name>.example.com
See vm.create for details.
Note: The preceding example illustrates the minimum required resources for worker nodes.
Ensure the VMs are running:
$ govc ls /<datacenter>/vm/<folder_name>
Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.
After 2 minutes, shut down the VMs:
$ for VM in $(govc ls /<datacenter>/vm/<folder_name>); do
    govc vm.power -s=true $VM
  done
Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.
Set the disk.enableUUID setting to TRUE:
$ for VM in $(govc ls /<datacenter>/vm/<folder_name>); do
    govc vm.change -vm $VM -e disk.enableUUID=TRUE
  done
Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.
Note: You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere.
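Optionally, confirm the setting on each VM before restarting. The govc vm.info -e option prints each VM's extra configuration, which should include disk.enableUUID set to TRUE:
$ for VM in $(govc ls /<datacenter>/vm/<folder_name>); do
    govc vm.info -e $VM | grep disk.enableUUID
  done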
Restart the VMs:
$ for VM in $(govc ls /<datacenter>/vm/<folder_name>); do
    govc vm.power -on=true $VM
  done
Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder.
- Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them has a Ready status.
- Select roles if needed.
- In Networking, clear the Allocate IPs via DHCP server checkbox.
- Set the API VIP address.
- Set the Ingress VIP address.
- Continue with the installation procedure.
Additional resources
14.2. vSphere postinstallation configuration using the CLI
After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:
- vCenter username
- vCenter password
- vCenter address
- vCenter cluster
- Data center
- Data store
- Folder
Prerequisites
- The Assisted Installer has finished installing the cluster successfully.
- The cluster is connected to console.redhat.com.
Procedure
Generate a base64-encoded username and password for vCenter:
$ echo -n "<vcenter_username>" | base64 -w0
Replace <vcenter_username> with your vCenter username.
$ echo -n "<vcenter_password>" | base64 -w0
Replace <vcenter_password> with your vCenter password.
Back up the vSphere credentials:
$ oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml
Edit the vSphere credentials:
$ cp creds_backup.yaml vsphere-creds.yaml
$ vi vsphere-creds.yaml
apiVersion: v1
data:
  <vcenter_address>.username: <vcenter_username_encoded>
  <vcenter_address>.password: <vcenter_password_encoded>
kind: Secret
metadata:
  annotations:
    cloudcredential.openshift.io/mode: passthrough
  creationTimestamp: "2022-01-25T17:39:50Z"
  name: vsphere-creds
  namespace: kube-system
  resourceVersion: "2437"
  uid: 06971978-e3a5-4741-87f9-2ca3602f2658
type: Opaque
Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password.
Replace the vSphere credentials:
$ oc replace -f vsphere-creds.yaml
Redeploy the kube-controller-manager pods:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
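You can watch the kube-controller-manager pods restart to confirm that the redeployment is under way:
$ oc get pods -n openshift-kube-controller-manager -w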
Back up the vSphere cloud provider configuration:
$ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml
Edit the cloud provider configuration:
$ cp cloud-provider-config_backup.yaml cloud-provider-config.yaml
$ vi cloud-provider-config.yaml
apiVersion: v1
data:
  config: |
    [Global]
    secret-name = "vsphere-creds"
    secret-namespace = "kube-system"
    insecure-flag = "1"
    [Workspace]
    server = "<vcenter_address>"
    datacenter = "<datacenter>"
    default-datastore = "<datastore>"
    folder = "/<datacenter>/vm/<folder>"
    [VirtualCenter "<vcenter_address>"]
    datacenters = "<datacenter>"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-01-25T17:40:49Z"
  name: cloud-provider-config
  namespace: openshift-config
  resourceVersion: "2070"
  uid: 80bb8618-bf25-442b-b023-b31311918507
Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs.
Apply the cloud provider configuration:
$ oc apply -f cloud-provider-config.yaml
Taint the nodes with the uninitialized taint:
Important: Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later.
Identify the nodes to taint:
$ oc get nodes
Run the following command for each node:
$ oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Replace <node_name> with the name of the node.
Example
$ oc get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master-0   Ready    control-plane,master   45h   v1.26.3+379cd9f
master-1   Ready    control-plane,master   45h   v1.26.3+379cd9f
worker-0   Ready    worker                 45h   v1.26.3+379cd9f
worker-1   Ready    worker                 45h   v1.26.3+379cd9f
master-2   Ready    control-plane,master   45h   v1.26.3+379cd9f

$ oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
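As an alternative sketch, the following loop taints every node in one pass. It assumes that all nodes in the cluster should receive the taint:
$ for NODE in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
    oc adm taint node $NODE node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
  done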
Back up the infrastructures configuration:
$ oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup
Edit the infrastructures configuration:
$ cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml
$ vi infrastructures.config.openshift.io.yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Infrastructure
  metadata:
    creationTimestamp: "2022-05-07T10:19:55Z"
    generation: 1
    name: cluster
    resourceVersion: "536"
    uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d
  spec:
    cloudConfig:
      key: config
      name: cloud-provider-config
    platformSpec:
      type: VSphere
      vsphere:
        failureDomains:
        - name: assisted-generated-failure-domain
          region: assisted-generated-region
          server: <vcenter_address>
          topology:
            computeCluster: /<data_center>/host/<vcenter_cluster>
            datacenter: <data_center>
            datastore: /<data_center>/datastore/<datastore>
            folder: "/<data_center>/path/to/folder"
            networks:
            - "VM Network"
            resourcePool: /<data_center>/host/<vcenter_cluster>/Resources
          zone: assisted-generated-zone
        nodeNetworking:
          external: {}
          internal: {}
        vcenters:
        - datacenters:
          - <data_center>
          server: <vcenter_address>
kind: List
metadata:
  resourceVersion: ""
Replace <vcenter_address> with your vCenter address. Replace <data_center> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace /<data_center>/path/to/folder with the path of the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed.
Apply the infrastructures configuration:
$ oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true
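To verify that the change was applied, you can check that the platform specification now reports VSphere. The command should print VSphere:
$ oc get infrastructures.config.openshift.io cluster -o jsonpath='{.spec.platformSpec.type}'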
14.3. vSphere postinstallation configuration using the web console
After installing an OpenShift Container Platform cluster by using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:
- vCenter address
- vCenter cluster
- vCenter username
- vCenter password
- Data center
- Default data store
- Virtual machine folder
Prerequisites
- The Assisted Installer has finished installing the cluster successfully.
- The cluster is connected to console.redhat.com.
Procedure
- In the Administrator perspective, navigate to Home > Overview.
- Under Status, click vSphere connection to open the vSphere connection configuration wizard.
- In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example, https://[your_vCenter_address]/ui.
- In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.
Important: This step is mandatory if you installed OpenShift Container Platform 4.13 or later.
- In the Username field, enter your vSphere vCenter username.
- In the Password field, enter your vSphere vCenter password.
Warning: The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable.
- In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter.
- In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename.
Warning: Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes.
- In the Virtual Machine Folder field, enter the data center folder that contains the virtual machines of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr. For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder.
- Click Save Configuration. This updates the cloud-provider-config file in the openshift-config namespace and starts the configuration process.
- Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy.
Verification
The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected.
Follow the steps below to monitor the configuration process.
Check that the configuration process completed successfully:
- In the Administrator perspective, navigate to Home > Overview.
- Under Status, click Operators. Wait for all operator statuses to change from Progressing to All succeeded. A Failed status indicates that the configuration failed.
- Under Status, click Control Plane. Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed.
A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again.
Check that you are able to bind PersistentVolumeClaims objects by performing the following steps:
- Create a StorageClass object using the following YAML:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-sc
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: YOURVCENTERDATASTORE
  diskformat: thin
reclaimPolicy: Delete
volumeBindingMode: Immediate
- Create a PersistentVolumeClaim object using the following YAML:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: openshift-config
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
  finalizers:
  - kubernetes.io/pvc-protection
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsphere-sc
  volumeMode: Filesystem
For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. To troubleshoot a PersistentVolumeClaim object, navigate to Storage > PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.