Chapter 13. Installing on Nutanix
If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and the dynamic provisioning of storage containers with the Nutanix Container Storage Interface (CSI).
To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see Environment requirements.
13.1. Adding hosts on Nutanix with the UI
To add hosts on Nutanix with the user interface (UI), generate the minimal discovery image ISO from the Assisted Installer. This is the default setting, and it downloads a smaller image that fetches the data needed to boot a host with networking; the majority of the content downloads upon boot. The ISO image is about 100MB in size.
After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Prerequisites
- You have created a cluster profile in the Assisted Installer UI.
- You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
Procedure
- In the Cluster details page, select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
- In the Host discovery page, click the Add hosts button.
- Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.
  - If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
  - In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key, or drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
- Select the required provisioning type.
  Note: Minimal image file: Provision with virtual media downloads a smaller image that fetches the data needed to boot.
- In Networking, select Cluster-managed networking. Nutanix does not support User-managed networking.
- Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server.
  Note: The proxy username and password must be URL-encoded.
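Percent-encoding can be done in the shell. The following Bash sketch is illustrative (the urlencode helper is not part of the Assisted Installer and handles ASCII input only):

```shell
# Illustrative Bash helper: percent-encode an ASCII string, leaving the
# unreserved characters (letters, digits, . ~ _ -) as-is.
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;
      *) printf -v c '%%%02X' "'$c"   # "'x" yields the character code
         out+="$c" ;;
    esac
  done
  printf '%s\n' "$out"
}

urlencode 'p@ss:word'   # prints p%40ss%3Aword
```

For example, a password such as p@ss:word would be entered as p%40ss%3Aword in the proxy URL.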
- Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details.
- Click Generate Discovery ISO.
- Copy the Discovery ISO URL.
- In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer.
- In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central.
  - Enter the Name. For example, control-plane or master.
  - Enter the Number of VMs. This should be 3, 4, or 5 for the control plane.
  - Ensure the remaining settings meet the minimum requirements for control plane hosts.
- In the Nutanix Prism UI, create the worker VMs through Prism Central.
  - Enter the Name. For example, worker.
  - Enter the Number of VMs. You should create at least 2 worker nodes.
  - Ensure the remaining settings meet the minimum requirements for worker hosts.
- Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each host has a Ready status.
- Continue with the installation procedure.
13.2. Adding hosts on Nutanix with the API
To add hosts on Nutanix with the API, generate the minimal discovery image ISO from the Assisted Installer. This is the default setting. The image includes only what is required to boot a host with networking; the majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Prerequisites
- You have set up the Assisted Installer API authentication.
- You have created an Assisted Installer cluster profile.
- You have created an Assisted Installer infrastructure environment.
- You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
- You have completed the Assisted Installer cluster configuration.
- You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
Procedure
- Configure the discovery image if you want it to boot with an ignition file.
Create a Nutanix cluster configuration file to hold the environment variables:
$ touch ~/nutanix-cluster-env.sh
$ chmod +x ~/nutanix-cluster-env.sh
If you have to start a new terminal session, you can reload the environment variables easily. For example:
$ source ~/nutanix-cluster-env.sh
Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_CLUSTER_NAME=<cluster_name>
EOF
Replace <cluster_name> with the name of the Nutanix cluster.
Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_SUBNET_NAME=<subnet_name>
EOF
Replace <subnet_name> with the name of the Nutanix cluster’s subnet.
Refresh the API token:
$ source refresh-token
Get the download URL:
$ curl -H "Authorization: Bearer ${API_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url
Create the Nutanix image configuration file:
$ cat << EOF > create-image.json
{
  "spec": {
    "name": "ocp_ai_discovery_image.iso",
    "description": "ocp_ai_discovery_image.iso",
    "resources": {
      "architecture": "X86_64",
      "image_type": "ISO_IMAGE",
      "source_uri": "<image_url>",
      "source_options": {
        "allow_insecure_connection": true
      }
    }
  },
  "metadata": {
    "spec_version": 3,
    "kind": "image"
  }
}
EOF
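If you script this step, the <image_url> placeholder can be filled in with sed instead of editing the file by hand. This is a hypothetical sketch; IMAGE_URL and the stand-in template are example values:

```shell
# Example URL standing in for the discovery image URL fetched earlier.
IMAGE_URL='https://example.com/discovery.iso'

# Stand-in for the create-image.json template generated above.
printf '{ "source_uri": "<image_url>" }\n' > create-image.json

# Substitute the placeholder; '|' as the sed delimiter avoids clashing
# with the slashes in the URL.
sed "s|<image_url>|${IMAGE_URL}|" create-image.json > create-image.resolved.json
```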
Replace <image_url> with the image URL downloaded from the previous step.
Create the Nutanix image:
$ curl -k -u <user>:'<password>' -X 'POST' \
  'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @./create-image.json | jq '.metadata.uuid'
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440.
Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_IMAGE_UUID=<uuid>
EOF
Get the Nutanix cluster UUID:
$ curl -k -u <user>:'<password>' -X 'POST' \
  'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{ "kind": "cluster" }' | jq '.entities[] | select(.spec.name=="<nutanix_cluster_name>") | .metadata.uuid'
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <nutanix_cluster_name> with the name of the Nutanix cluster.
Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_CLUSTER_UUID=<uuid>
EOF
Replace <uuid> with the returned UUID of the Nutanix cluster.
Get the Nutanix cluster’s subnet UUID:
$ curl -k -u <user>:'<password>' -X 'POST' \
  'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{ "kind": "subnet", "filter": "name==<subnet_name>" }' | jq '.entities[].metadata.uuid'
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <subnet_name> with the name of the cluster’s subnet.
Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_SUBNET_UUID=<uuid>
EOF
Replace <uuid> with the returned UUID of the cluster subnet.
Ensure the Nutanix environment variables are set:
$ source ~/nutanix-cluster-env.sh
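Before creating the VMs it can help to confirm that every variable the later steps rely on is set. An illustrative Bash check (the check_vars helper is not part of the procedure):

```shell
# Illustrative sanity check: report any required NTX_* variable that is
# unset or empty. Uses Bash indirect expansion (${!v}).
check_vars() {
  local v
  for v in NTX_CLUSTER_NAME NTX_SUBNET_NAME NTX_IMAGE_UUID \
           NTX_CLUSTER_UUID NTX_SUBNET_UUID; do
    if [ -z "${!v:-}" ]; then
      echo "missing: $v" >&2
      return 1
    fi
  done
  return 0
}
```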
Create a VM configuration file for each Nutanix host. Create three to five control plane (master) VMs and at least two worker VMs. For example:
$ touch create-master-0.json
$ cat << EOF > create-master-0.json
{
  "spec": {
    "name": "<host_name>",
    "resources": {
      "power_state": "ON",
      "num_vcpus_per_socket": 1,
      "num_sockets": 16,
      "memory_size_mib": 32768,
      "disk_list": [
        {
          "disk_size_mib": 122880,
          "device_properties": {
            "device_type": "DISK"
          }
        },
        {
          "device_properties": {
            "device_type": "CDROM"
          },
          "data_source_reference": {
            "kind": "image",
            "uuid": "$NTX_IMAGE_UUID"
          }
        }
      ],
      "nic_list": [
        {
          "nic_type": "NORMAL_NIC",
          "is_connected": true,
          "ip_endpoint_list": [
            {
              "ip_type": "DHCP"
            }
          ],
          "subnet_reference": {
            "kind": "subnet",
            "name": "$NTX_SUBNET_NAME",
            "uuid": "$NTX_SUBNET_UUID"
          }
        }
      ],
      "guest_tools": {
        "nutanix_guest_tools": {
          "state": "ENABLED",
          "iso_mount_state": "MOUNTED"
        }
      }
    },
    "cluster_reference": {
      "kind": "cluster",
      "name": "$NTX_CLUSTER_NAME",
      "uuid": "$NTX_CLUSTER_UUID"
    }
  },
  "api_version": "3.1.0",
  "metadata": {
    "kind": "vm"
  }
}
EOF
Replace <host_name> with the name of the host.
Boot each Nutanix virtual machine:
$ curl -k -u <user>:'<password>' -X 'POST' \
  'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @./<vm_config_file_name> | jq '.metadata.uuid'
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <vm_config_file_name> with the name of the VM configuration file.
Assign the returned VM UUID to a unique environment variable in the configuration file:
$ cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_MASTER_0_UUID=<uuid>
EOF
Replace <uuid> with the returned UUID of the VM.
Note: The environment variable must have a unique name for each VM.
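Rather than writing each file by hand, the per-host configuration files can be generated from one template. A hypothetical sketch, assuming a vm-template.json copy of the example configuration with the <host_name> placeholder left in place (a stand-in template is created here so the snippet is self-contained):

```shell
# Stand-in for a vm-template.json copied from the example configuration,
# with <host_name> left as a placeholder.
printf '{ "spec": { "name": "<host_name>" } }\n' > vm-template.json

# Generate one configuration file per host.
for host in master-0 master-1 master-2 worker-0 worker-1; do
  sed "s|<host_name>|${host}|" vm-template.json > "create-${host}.json"
done
```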
Wait until the Assisted Installer has discovered each VM and each VM has passed validation:
$ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
  --header "Content-Type: application/json" \
  -H "Authorization: Bearer $API_TOKEN" | jq '.enabled_host_count'
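Discovery can take a while, so a polling loop is convenient. The retry helper below is illustrative and is not part of the Assisted Installer API:

```shell
# Illustrative retry helper: run a command until its output equals the
# expected value, sleeping one second between attempts.
wait_for_output() {
  local expected="$1" attempts="$2"
  shift 2
  local i out
  for (( i = 0; i < attempts; i++ )); do
    out="$("$@")"
    if [ "$out" = "$expected" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

For example, wrapping the curl pipeline above in a small script and running wait_for_output 5 60 ./count-hosts.sh would poll for up to a minute until five hosts report in (count-hosts.sh is a hypothetical name).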
Modify the cluster definition to enable integration with Nutanix:
$ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
  -X PATCH \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{ "platform_type": "nutanix" }' | jq
- Continue with the installation procedure.
13.3. Nutanix postinstallation configuration
Complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.
By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can host the RHCOS image on any HTTP server and point the installation program to the image or you can use Prism Central to upload the image manually.
13.3.1. Updating the Nutanix configuration settings
After installing OpenShift Container Platform on the Nutanix platform by using the Assisted Installer, update the following Nutanix configuration settings manually.
Prerequisites
- You have your Nutanix Prism Element username.
- You have your Nutanix Prism Element password.
- You have your Nutanix Prism storage container.
- The Assisted Installer has finished installing the cluster successfully.
- You have connected the cluster to console.redhat.com.
- You have access to the Red Hat OpenShift Container Platform command line interface.
Procedure
In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:
$ oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF
{
  "spec": {
    "platformSpec": {
      "nutanix": {
        "prismCentral": {
          "address": "<prismcentral_address>",
          "port": <prismcentral_port>
        },
        "prismElements": [
          {
            "endpoint": {
              "address": "<prismelement_address>",
              "port": <prismelement_port>
            },
            "name": "<prismelement_clustername>"
          }
        ]
      },
      "type": "Nutanix"
    }
  }
}
EOF
Replace <prismcentral_address> with the Nutanix Prism Central address. Replace <prismcentral_port> with the Nutanix Prism Central port. Replace <prismelement_address> with the Nutanix Prism Element address. Replace <prismelement_port> with the Nutanix Prism Element port. Replace <prismelement_clustername> with the Nutanix Prism Element cluster name.
Example output
infrastructure.config.openshift.io/cluster patched
For additional details, see Creating a compute machine set on Nutanix.
Note: Optional: You can define Prism category key and value pairs. These category key-value pairs must exist in Prism Central. You can define the key-value pairs in separate categories for compute nodes, control plane nodes, or all nodes.
Create the Nutanix secret:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: nutanix-credentials
  namespace: openshift-machine-api
type: Opaque
stringData:
  credentials: |
    [{"type":"basic_auth","data":{"prismCentral":{"username":"<prismcentral_username>","password":"<prismcentral_password>"},"prismElements":null}}]
EOF
Replace <prismcentral_username> with the Nutanix Prism Central username and <prismcentral_password> with the Nutanix Prism Central password.
Example output
secret/nutanix-credentials created
When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:
Get the Nutanix cloud provider configuration YAML file:
$ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml
Create a backup of the configuration file:
$ cp cloud-provider-config-backup.yaml cloud-provider-config.yaml
Open the configuration YAML file:
$ vi cloud-provider-config.yaml
Edit the configuration YAML file as follows:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cloud-provider-config
  namespace: openshift-config
data:
  config: |
    {
      "prismCentral": {
        "address": "<prismcentral_address>",
        "port": <prismcentral_port>,
        "credentialRef": {
          "kind": "Secret",
          "name": "nutanix-credentials",
          "namespace": "openshift-cloud-controller-manager"
        }
      },
      "topologyDiscovery": {
        "type": "Prism",
        "topologyCategories": null
      },
      "enableCustomLabeling": true
    }
Apply the configuration updates:
$ oc apply -f cloud-provider-config.yaml
Example output
Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. configmap/cloud-provider-config configured
13.3.2. Creating the Nutanix CSI Operator group
Create an Operator group for the Nutanix CSI Operator.
For a description of operator groups and related concepts, see Common Operator Framework terms.
Prerequisites
- You have updated the Nutanix configuration settings.
Procedure
Open the Nutanix CSI Operator Group YAML file:
$ vi openshift-cluster-csi-drivers-operator-group.yaml
Edit the YAML file as follows:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  generateName: openshift-cluster-csi-drivers
  namespace: openshift-cluster-csi-drivers
spec:
  targetNamespaces:
    - openshift-cluster-csi-drivers
  upgradeStrategy: Default
Create the Operator Group:
$ oc create -f openshift-cluster-csi-drivers-operator-group.yaml
Example output
operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created
13.3.3. Installing the Nutanix CSI Operator
The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator documentation.
Prerequisites
- You have created the Nutanix CSI Operator group.
Procedure
Get the parameter values for the Nutanix CSI Operator YAML file:
Check that the Nutanix CSI Operator exists:
$ oc get packagemanifests | grep nutanix
Example output
nutanixcsioperator Certified Operators 129m
Assign the default channel for the Operator to a BASH variable:
$ DEFAULT_CHANNEL=$(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})
Assign the starting cluster service version (CSV) for the Operator to a BASH variable:
$ STARTING_CSV=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.channels[*].currentCSV\})
Assign the catalog source for the subscription to a BASH variable:
$ CATALOG_SOURCE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSource\})
Assign the Nutanix CSI Operator source namespace to a BASH variable:
$ SOURCE_NAMESPACE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSourceNamespace\})
Create the Nutanix CSI Operator YAML file using the BASH variables:
$ cat << EOF > nutanixcsioperator.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nutanixcsioperator
  namespace: openshift-cluster-csi-drivers
spec:
  channel: $DEFAULT_CHANNEL
  installPlanApproval: Automatic
  name: nutanixcsioperator
  source: $CATALOG_SOURCE
  sourceNamespace: $SOURCE_NAMESPACE
  startingCSV: $STARTING_CSV
EOF
Create the Nutanix CSI Operator:
$ oc apply -f nutanixcsioperator.yaml
Example output
subscription.operators.coreos.com/nutanixcsioperator created
Run the following command until the Operator subscription state changes to AtLatestKnown. This indicates that the Operator subscription has been created, and might take some time.
$ oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'
13.3.4. Deploying the Nutanix CSI storage driver
The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator documentation.
Prerequisites
- You have installed the Nutanix CSI Operator.
Procedure
Create a NutanixCsiStorage resource to deploy the driver:
$ cat <<EOF | oc create -f -
apiVersion: crd.nutanix.com/v1alpha1
kind: NutanixCsiStorage
metadata:
  name: nutanixcsistorage
  namespace: openshift-cluster-csi-drivers
spec: {}
EOF
Example output
nutanixcsistorage.crd.nutanix.com/nutanixcsistorage created
Create a Nutanix secret YAML file for the CSI storage driver:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: ntnx-secret
  namespace: openshift-cluster-csi-drivers
stringData:
  # prism-element-ip:prism-port:admin:password
  key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password>
EOF
Replace these parameters with actual values while keeping the same format.
Example output
secret/ntnx-secret created
13.3.5. Validating the postinstallation configurations
Verify that you can create a storage class and a bound persistent volume claim.
Prerequisites
- You have deployed the Nutanix CSI storage driver.
Procedure
Verify that you can create a storage class:
$ cat <<EOF | oc create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nutanix-volume
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: csi.nutanix.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers
  csi.storage.k8s.io/provisioner-secret-name: ntnx-secret
  storageContainer: <nutanix_storage_container>
  csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret
  csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers
  storageType: NutanixVolumes
  csi.storage.k8s.io/node-publish-secret-name: ntnx-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
Note: Take <nutanix_storage_container> from the Nutanix configuration; for example, SelfServiceContainer.
Example output
storageclass.storage.k8s.io/nutanix-volume created
Verify that you can create the Nutanix persistent volume claim (PVC):
Create the persistent volume claim (PVC):
$ cat <<EOF | oc create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nutanix-volume-pvc
  namespace: openshift-cluster-csi-drivers
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com
    volume.kubernetes.io/storage-provisioner: csi.nutanix.com
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nutanix-volume
  volumeMode: Filesystem
EOF
Example output
persistentvolumeclaim/nutanix-volume-pvc created
Validate that the persistent volume claim (PVC) status is Bound:
$ oc get pvc -n openshift-cluster-csi-drivers
Example output
NAME                 STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
nutanix-volume-pvc   Bound                                       nutanix-volume   52s
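To script this check, the STATUS column can be extracted with awk. The helper below is illustrative and is fed a stand-in sample, since the real command needs a live cluster:

```shell
# Illustrative helper: print the STATUS column for the named PVC from
# `oc get pvc`-style tabular output read on stdin.
pvc_status() {
  awk -v name="$1" '$1 == name { print $2 }'
}

# Stand-in sample of `oc get pvc` output.
sample='NAME                 STATUS
nutanix-volume-pvc   Bound'

printf '%s\n' "$sample" | pvc_status nutanix-volume-pvc   # prints Bound
```

In practice this would be fed from the real command, for example: oc get pvc -n openshift-cluster-csi-drivers | pvc_status nutanix-volume-pvc.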
Additional resources