Chapter 12. Installing on Nutanix
If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and the dynamic provisioning of storage containers with the Nutanix Container Storage Interface (CSI).
To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see "Environment requirements" in Additional resources.
12.1. Adding hosts on Nutanix with the UI
To add hosts on Nutanix with the user interface (UI), generate the minimal discovery image ISO from the Assisted Installer. This is the default setting. The minimal ISO is about 100 MB in size and contains only what the host needs to boot with networking; the majority of the content is downloaded upon boot.
After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Prerequisites
- You have created a cluster profile in the Assisted Installer UI.
- You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
Procedure
- In the Cluster details page, select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
- In the Host discovery page, click the Add hosts button.
- Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.
  - If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
  - In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key, or drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
- Select the required provisioning type.
  Note: Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot.
- In Networking, select Cluster-managed networking. Nutanix does not support User-managed networking.
- Optional: If the cluster hosts require the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, required domains or IP addresses, and port for the HTTP and HTTPS URLs of the proxy server. If the cluster hosts are behind a firewall, allow the nodes to access the required domains or IP addresses through the firewall. See Configuring your firewall for OpenShift Container Platform for more information.
  Note: The proxy username and password must be URL-encoded.
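Because of this requirement, special characters in the proxy credentials must be percent-encoded before they go into the proxy URL. A quick sketch using a python3 one-liner (the password shown is a made-up example):

```shell
# URL-encode a proxy password; '@', ':' and '!' would otherwise
# break the proxy URL. The value is hypothetical.
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss:word!'
# → p%40ss%3Aword%21
```

The encoded value can then be embedded in the proxy URL, for example http://<username>:<encoded_password>@<proxy_host>:<port>.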
- Optional: Configure the discovery image if you want to boot it with an ignition file. For details, see Configuring the discovery image.
- Click Generate Discovery ISO.
- Copy the Discovery ISO URL.
- In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer.
- In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central.
  - Enter the Name. For example, control-plane or master.
  - Enter the Number of VMs. This should be 3, 4, or 5 for the control plane.
  - Ensure the remaining settings meet the minimum requirements for control plane hosts.
- In the Nutanix Prism UI, create the worker VMs through Prism Central.
  - Enter the Name. For example, worker.
  - Enter the Number of VMs. You should create at least 2 worker nodes.
  - Ensure the remaining settings meet the minimum requirements for worker hosts.
- Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them has a Ready status.
- Continue with the installation procedure.
12.2. Adding hosts on Nutanix with the API
To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Prerequisites
- You have set up the Assisted Installer API authentication.
- You have created an Assisted Installer cluster profile.
- You have created an Assisted Installer infrastructure environment.
- You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
- You have completed the Assisted Installer cluster configuration.
- You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
Procedure
- Configure the discovery image if you want it to boot with an ignition file.
- Create a Nutanix cluster configuration file to hold the environment variables:
  $ touch ~/nutanix-cluster-env.sh
  $ chmod +x ~/nutanix-cluster-env.sh
  If you have to start a new terminal session, you can reload the environment variables easily. For example:
  $ source ~/nutanix-cluster-env.sh
- Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_CLUSTER_NAME=<cluster_name>
  EOF
  Replace <cluster_name> with the name of the Nutanix cluster.
- Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_SUBNET_NAME=<subnet_name>
  EOF
  Replace <subnet_name> with the name of the Nutanix cluster’s subnet.
- Refresh the API token:
  $ source refresh-token
- Get the download URL:
  $ curl -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url
- Create the Nutanix image configuration file:
  $ cat << EOF > create-image.json
  {
      "spec": {
          "name": "ocp_ai_discovery_image.iso",
          "description": "ocp_ai_discovery_image.iso",
          "resources": {
              "architecture": "X86_64",
              "image_type": "ISO_IMAGE",
              "source_uri": "<image_url>",
              "source_options": {
                  "allow_insecure_connection": true
              }
          }
      },
      "metadata": {
          "spec_version": 3,
          "kind": "image"
      }
  }
  EOF
  Replace <image_url> with the image URL downloaded from the previous step.
- Create the Nutanix image:
  $ curl -k -u <user>:'<password>' -X 'POST' \
    'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d @./create-image.json | jq '.metadata.uuid'
  - Replace <user> with the Nutanix user name.
  - Replace '<password>' with the Nutanix password.
  - Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform.
  - Replace <port> with the port for the Nutanix server. The port defaults to 9440.
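The jq filter at the end of this command pulls the new image's UUID out of the response. An offline sketch of the same extraction, run against a minimal, fabricated Prism v3 payload:

```shell
# Fabricated response body in the shape returned by POST /api/nutanix/v3/images;
# jq -r extracts only the image UUID used in the next step.
response='{"status":{"state":"PENDING"},"metadata":{"kind":"image","uuid":"d5f2a1e4-3b6c-4d7e-8f90-123456789abc"}}'
echo "$response" | jq -r '.metadata.uuid'
# → d5f2a1e4-3b6c-4d7e-8f90-123456789abc
```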
- Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_IMAGE_UUID=<uuid>
  EOF
- Get the Nutanix cluster UUID:
  $ curl -k -u <user>:'<password>' -X 'POST' \
    'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
      "kind": "cluster"
    }' | jq '.entities[] | select(.spec.name=="<nutanix_cluster_name>") | .metadata.uuid'
  - Replace <user> with the Nutanix user name.
  - Replace '<password>' with the Nutanix password.
  - Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform.
  - Replace <port> with the port for the Nutanix server. The port defaults to 9440.
  - Replace <nutanix_cluster_name> with the name of the Nutanix cluster.
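The select() filter is needed because clusters/list returns every cluster that Prism Central knows about. An offline sketch of the filter on a fabricated two-cluster response:

```shell
# Fabricated clusters/list response with two clusters; select() keeps only
# the entity whose spec.name matches, then .metadata.uuid emits its UUID.
sample='{"entities":[{"spec":{"name":"ntnx-dev"},"metadata":{"uuid":"aaaa-1111"}},{"spec":{"name":"ntnx-prod"},"metadata":{"uuid":"bbbb-2222"}}]}'
echo "$sample" | jq -r '.entities[] | select(.spec.name=="ntnx-prod") | .metadata.uuid'
# → bbbb-2222
```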
- Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_CLUSTER_UUID=<uuid>
  EOF
  Replace <uuid> with the returned UUID of the Nutanix cluster.
- Get the Nutanix cluster’s subnet UUID:
  $ curl -k -u <user>:'<password>' -X 'POST' \
    'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
      "kind": "subnet",
      "filter": "name==<subnet_name>"
    }' | jq '.entities[].metadata.uuid'
  - Replace <user> with the Nutanix user name.
  - Replace '<password>' with the Nutanix password.
  - Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform.
  - Replace <port> with the port for the Nutanix server. The port defaults to 9440.
  - Replace <subnet_name> with the name of the cluster’s subnet.
- Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_SUBNET_UUID=<uuid>
  EOF
  Replace <uuid> with the returned UUID of the cluster subnet.
- Ensure the Nutanix environment variables are set:
  $ source ~/nutanix-cluster-env.sh
- Create a VM configuration file for each Nutanix host. Create three to five control plane (master) VMs and at least two worker VMs. For example:
  $ touch create-master-0.json
  $ cat << EOF > create-master-0.json
  {
      "spec": {
          "name": "<host_name>",
          "resources": {
              "power_state": "ON",
              "num_vcpus_per_socket": 1,
              "num_sockets": 16,
              "memory_size_mib": 32768,
              "disk_list": [
                  {
                      "disk_size_mib": 122880,
                      "device_properties": {
                          "device_type": "DISK"
                      }
                  },
                  {
                      "device_properties": {
                          "device_type": "CDROM"
                      },
                      "data_source_reference": {
                          "kind": "image",
                          "uuid": "$NTX_IMAGE_UUID"
                      }
                  }
              ],
              "nic_list": [
                  {
                      "nic_type": "NORMAL_NIC",
                      "is_connected": true,
                      "ip_endpoint_list": [
                          {
                              "ip_type": "DHCP"
                          }
                      ],
                      "subnet_reference": {
                          "kind": "subnet",
                          "name": "$NTX_SUBNET_NAME",
                          "uuid": "$NTX_SUBNET_UUID"
                      }
                  }
              ],
              "guest_tools": {
                  "nutanix_guest_tools": {
                      "state": "ENABLED",
                      "iso_mount_state": "MOUNTED"
                  }
              }
          },
          "cluster_reference": {
              "kind": "cluster",
              "name": "$NTX_CLUSTER_NAME",
              "uuid": "$NTX_CLUSTER_UUID"
          }
      },
      "api_version": "3.1.0",
      "metadata": {
          "kind": "vm"
      }
  }
  EOF
  Replace <host_name> with the name of the host.
- Boot each Nutanix virtual machine:
  $ curl -k -u <user>:'<password>' -X 'POST' \
    'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d @./<vm_config_file_name> | jq '.metadata.uuid'
  - Replace <user> with the Nutanix user name.
  - Replace '<password>' with the Nutanix password.
  - Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform.
  - Replace <port> with the port for the Nutanix server. The port defaults to 9440.
  - Replace <vm_config_file_name> with the name of the VM configuration file.
- Assign the returned VM UUID to a unique environment variable in the configuration file:
  $ cat << EOF >> ~/nutanix-cluster-env.sh
  export NTX_MASTER_0_UUID=<uuid>
  EOF
  Replace <uuid> with the returned UUID of the VM.
  Note: The environment variable must have a unique name for each VM.
- Wait until the Assisted Installer has discovered each VM and they have passed validation.
  $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
    -H "Authorization: Bearer $API_TOKEN" \
    | jq '.enabled_host_count'
- Modify the cluster definition to enable integration with Nutanix:
  $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "platform_type":"nutanix"
    }
    ' | jq
- Continue with the installation procedure.
12.3. Nutanix postinstallation configuration
Complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.
By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can host the RHCOS image on any HTTP server and point the installation program to the image or you can use Prism Central to upload the image manually.
12.3.1. Updating the Nutanix configuration settings
After installing OpenShift Container Platform on the Nutanix platform by using the Assisted Installer, update the following Nutanix configuration settings manually.
Prerequisites
- You have your Nutanix Prism Element username.
- You have your Nutanix Prism Element password.
- You have your Nutanix Prism storage container.
- The Assisted Installer has finished installing the cluster successfully.
- You have connected the cluster to console.redhat.com.
- You have access to the Red Hat OpenShift Container Platform command line interface.
Procedure
- In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:
  $ oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF
  {
    "spec": {
      "platformSpec": {
        "nutanix": {
          "prismCentral": {
            "address": "<prismcentral_address>",
            "port": <prismcentral_port>
          },
          "prismElements": [
            {
              "endpoint": {
                "address": "<prismelement_address>",
                "port": <prismelement_port>
              },
              "name": "<prismelement_clustername>"
            }
          ]
        },
        "type": "Nutanix"
      }
    }
  }
  EOF
  - Replace <prismcentral_address> with the Nutanix Prism Central address.
  - Replace <prismcentral_port> with the Nutanix Prism Central port.
  - Replace <prismelement_address> with the Nutanix Prism Element address.
  - Replace <prismelement_port> with the Nutanix Prism Element port.
  - Replace <prismelement_clustername> with the Nutanix Prism Element cluster name.
  Example output:
  infrastructure.config.openshift.io/cluster patched
  For additional details, see Creating a compute machine set on Nutanix.
  Note: Optional: You can define prism category key and value pairs. These category key-value pairs must exist in Prism Central. You can define the key-value pairs in separate categories for compute nodes, control plane nodes, or all nodes.
- Create the Nutanix secret:
  $ cat <<EOF | oc create -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: nutanix-credentials
    namespace: openshift-machine-api
  type: Opaque
  stringData:
    credentials: |
      [{"type":"basic_auth","data":{"prismCentral":{"username":"${<prismcentral_username>}","password":"${<prismcentral_password>}"},"prismElements":null}}]
  EOF
  - Replace <prismcentral_username> with the Nutanix Prism Central username.
  - Replace <prismcentral_password> with the Nutanix Prism Central password.
  Example output:
  secret/nutanix-credentials created
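Writing the one-line credentials JSON by hand is error-prone when the password contains quotes or backslashes. A hedged sketch that generates the same line with jq, which handles the escaping (variable names and values here are illustrative):

```shell
# Build the credentials line for the nutanix-credentials secret; jq
# escapes any special characters in the username or password safely.
PC_USER='admin' PC_PASS='ex@mple-password'
jq -cn --arg u "$PC_USER" --arg p "$PC_PASS" \
  '[{"type":"basic_auth","data":{"prismCentral":{"username":$u,"password":$p},"prismElements":null}}]'
```

The output can be pasted directly into the credentials field of the secret.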
- When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:
  - Get the Nutanix cloud provider configuration YAML file:
    $ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml
  - Create a working copy of the configuration file from the backup:
    $ cp cloud-provider-config-backup.yaml cloud-provider-config.yaml
  - Open the configuration YAML file:
    $ vi cloud-provider-config.yaml
  - Edit the configuration YAML file as follows:
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: cloud-provider-config
      namespace: openshift-config
    data:
      config: |
        {
          "prismCentral": {
            "address": "<prismcentral_address>",
            "port":<prismcentral_port>,
            "credentialRef": {
              "kind": "Secret",
              "name": "nutanix-credentials",
              "namespace": "openshift-cloud-controller-manager"
            }
          },
          "topologyDiscovery": {
            "type": "Prism",
            "topologyCategories": null
          },
          "enableCustomLabeling": true
        }
  - Apply the configuration updates:
    $ oc apply -f cloud-provider-config.yaml
    Example output:
    Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
    configmap/cloud-provider-config configured
12.3.2. Creating the Nutanix CSI Operator group
Create an Operator group for the Nutanix CSI Operator.
For a description of operator groups and related concepts, see Understanding Operators.
Prerequisites
- You have updated the Nutanix configuration settings.
Procedure
- Open the Nutanix CSI Operator Group YAML file:
  $ vi openshift-cluster-csi-drivers-operator-group.yaml
- Edit the YAML file as follows:
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    generateName: openshift-cluster-csi-drivers
    namespace: openshift-cluster-csi-drivers
  spec:
    targetNamespaces:
      - openshift-cluster-csi-drivers
    upgradeStrategy: Default
- Create the Operator Group:
  $ oc create -f openshift-cluster-csi-drivers-operator-group.yaml
  Example output:
  operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created
12.3.3. Installing the Nutanix CSI Operator
The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator documentation.
Prerequisites
- You have created the Nutanix CSI Operator group.
Procedure
- Get the parameter values for the Nutanix CSI Operator YAML file:
  - Check that the Nutanix CSI Operator exists:
    $ oc get packagemanifests | grep nutanix
    Example output:
    nutanixcsioperator   Certified Operators   129m
  - Assign the default channel for the Operator to a BASH variable:
    $ DEFAULT_CHANNEL=$(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})
  - Assign the starting cluster service version (CSV) for the Operator to a BASH variable:
    $ STARTING_CSV=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.channels[*].currentCSV\})
  - Assign the catalog source for the subscription to a BASH variable:
    $ CATALOG_SOURCE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSource\})
  - Assign the Nutanix CSI Operator source namespace to a BASH variable:
    $ SOURCE_NAMESPACE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSourceNamespace\})
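If any of these lookups silently returns an empty string, for example when the package manifest is not yet available, the Subscription manifest generated in the next step will be invalid. A minimal guard sketch; the helper name require_vars is hypothetical:

```shell
# Print "ok" when every named variable has a value; otherwise name the
# first empty variable and return non-zero. POSIX-compatible via eval.
require_vars() {
  for v in "$@"; do
    val=$(eval "printf '%s' \"\${$v}\"")
    [ -n "$val" ] || { echo "empty: $v" >&2; return 1; }
  done
  echo "ok"
}
```

For example, run require_vars DEFAULT_CHANNEL STARTING_CSV CATALOG_SOURCE SOURCE_NAMESPACE before generating the YAML file.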
- Create the Nutanix CSI Operator YAML file using the BASH variables:
  $ cat << EOF > nutanixcsioperator.yaml
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: nutanixcsioperator
    namespace: openshift-cluster-csi-drivers
  spec:
    channel: $DEFAULT_CHANNEL
    installPlanApproval: Automatic
    name: nutanixcsioperator
    source: $CATALOG_SOURCE
    sourceNamespace: $SOURCE_NAMESPACE
    startingCSV: $STARTING_CSV
  EOF
- Create the Nutanix CSI Operator:
  $ oc apply -f nutanixcsioperator.yaml
  Example output:
  subscription.operators.coreos.com/nutanixcsioperator created
- Run the following command until the Operator subscription state changes to AtLatestKnown. This indicates that the Operator subscription has been created, and might take some time.
  $ oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'
12.3.4. Deploying the Nutanix CSI storage driver
The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator documentation.
Prerequisites
- You have installed the Nutanix CSI Operator.
Procedure
- Create a NutanixCsiStorage resource to deploy the driver:
  $ cat <<EOF | oc create -f -
  apiVersion: crd.nutanix.com/v1alpha1
  kind: NutanixCsiStorage
  metadata:
    name: nutanixcsistorage
    namespace: openshift-cluster-csi-drivers
  spec: {}
  EOF
  Example output:
  nutanixcsistorage.crd.nutanix.com/nutanixcsistorage created
- Create a Nutanix secret YAML file for the CSI storage driver:
  $ cat <<EOF | oc create -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: ntnx-secret
    namespace: openshift-cluster-csi-drivers
  stringData:
    # prism-element-ip:prism-port:admin:password
    key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password>
  EOF
  Replace the key parameters with actual values while keeping the same format.
  Example output:
  secret/ntnx-secret created
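The key value above is a single string that concatenates four fields with colons, as the comment in the YAML indicates. A small sketch of the format with hypothetical values:

```shell
# Assemble the ntnx-secret key from its four parts:
# prism-element-ip:prism-port:username:password (all values made up).
PE_ADDR='10.0.0.5'
PE_PORT='9440'
PC_USER='admin'
PC_PASS='example-password'
printf '%s:%s:%s:%s\n' "$PE_ADDR" "$PE_PORT" "$PC_USER" "$PC_PASS"
# → 10.0.0.5:9440:admin:example-password
```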
12.3.5. Validating the postinstallation configurations
Verify that you can create a storage class and a bound persistent volume claim.
Prerequisites
- You have deployed the Nutanix CSI storage driver.
Procedure
- Verify that you can create a storage class:
  $ cat <<EOF | oc create -f -
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: nutanix-volume
    annotations:
      storageclass.kubernetes.io/is-default-class: 'true'
  provisioner: csi.nutanix.com
  parameters:
    csi.storage.k8s.io/fstype: ext4
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers
    csi.storage.k8s.io/provisioner-secret-name: ntnx-secret
    storageContainer: <nutanix_storage_container>
    csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret
    csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers
    storageType: NutanixVolumes
    csi.storage.k8s.io/node-publish-secret-name: ntnx-secret
    csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers
  reclaimPolicy: Delete
  allowVolumeExpansion: true
  volumeBindingMode: Immediate
  EOF
  Note: Take <nutanix_storage_container> from the Nutanix configuration; for example, SelfServiceContainer.
  Example output:
  storageclass.storage.k8s.io/nutanix-volume created
- Verify that you can create the Nutanix persistent volume claim (PVC):
  - Create the persistent volume claim (PVC):
    $ cat <<EOF | oc create -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nutanix-volume-pvc
      namespace: openshift-cluster-csi-drivers
      annotations:
        volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com
        volume.kubernetes.io/storage-provisioner: csi.nutanix.com
      finalizers:
        - kubernetes.io/pvc-protection
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: nutanix-volume
      volumeMode: Filesystem
    EOF
    Example output:
    persistentvolumeclaim/nutanix-volume-pvc created
  - Validate that the persistent volume claim (PVC) status is Bound:
    $ oc get pvc -n openshift-cluster-csi-drivers
    Example output:
    NAME                 STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    nutanix-volume-pvc   Bound             nutanix-volume                             52s