
Chapter 13. Optional: Installing on Nutanix


If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform. This integration exposes the Machine API to Nutanix, enables autoscaling, and dynamically provisions storage containers by using the Nutanix Container Storage Interface (CSI).

13.1. Adding hosts on Nutanix with the UI

To add hosts on Nutanix with the user interface (UI), generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO, which is the default setting. The image includes only what is required to boot a host with networking; the majority of the content is downloaded upon boot. The ISO image is about 100 MB in size.

After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

Prerequisites

  • You have created a cluster profile in the Assisted Installer UI.
  • You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.

Procedure

  1. In Cluster details, select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  2. In Host discovery, click the Add hosts button.
  3. Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

    1. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
    2. In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
  4. Select the desired provisioning type.

    Note

    Minimal image file: Provision with virtual media downloads a smaller image that fetches the data needed to boot.

  5. In Networking, select Cluster-managed networking. Nutanix does not support User-managed networking.

    1. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.
    2. Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details.
  6. Click Generate Discovery ISO.
  7. Copy the Discovery ISO URL.
  8. In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer.
  9. In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central.

    1. Enter the Name. For example, control-plane or master.
    2. Enter the Number of VMs. This should be 3 for the control plane.
    3. Ensure the remaining settings meet the minimum requirements for control plane hosts.
  10. In the Nutanix Prism UI, create the worker VMs through Prism Central.

    1. Enter the Name. For example, worker.
    2. Enter the Number of VMs. You should create at least 2 worker nodes.
    3. Ensure the remaining settings meet the minimum requirements for worker hosts.
  11. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each host has a Ready status.
  12. Continue with the installation procedure.

13.2. Adding hosts on Nutanix with the API

To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO, which is the default setting. The image includes only what is required to boot a host with networking; the majority of the content is downloaded upon boot. The ISO image is about 100 MB in size.

Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

Prerequisites

  • You have set up the Assisted Installer API authentication.
  • You have created an Assisted Installer cluster profile.
  • You have created an Assisted Installer infrastructure environment.
  • You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
  • You have completed the Assisted Installer cluster configuration.
  • You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.

Procedure

  1. Configure the discovery image if you want it to boot with an ignition file.
  2. Create a Nutanix cluster configuration file to hold the environment variables:

    $ touch ~/nutanix-cluster-env.sh
    $ chmod +x ~/nutanix-cluster-env.sh

    If you have to start a new terminal session, you can reload the environment variables easily. For example:

    $ source ~/nutanix-cluster-env.sh
  3. Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_CLUSTER_NAME=<cluster_name>
    EOF

    Replace <cluster_name> with the name of the Nutanix cluster.

  4. Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_SUBNET_NAME=<subnet_name>
    EOF

    Replace <subnet_name> with the name of the Nutanix cluster’s subnet.

  5. Refresh the API token:

    $ source refresh-token
  6. Get the download URL:

    $ curl -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url
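
    Optionally, capture the URL in an environment variable so that you can reuse it in the next step; this is a minimal sketch that assumes the response body contains a url field:

    $ IMAGE_URL=$(curl -s -H "Authorization: Bearer ${API_TOKEN}" \
      https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url \
      | jq -r '.url')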
  7. Create the Nutanix image configuration file:

    $ cat << EOF > create-image.json
    {
      "spec": {
        "name": "ocp_ai_discovery_image.iso",
        "description": "ocp_ai_discovery_image.iso",
        "resources": {
          "architecture": "X86_64",
          "image_type": "ISO_IMAGE",
          "source_uri": "<image_url>",
          "source_options": {
            "allow_insecure_connection": true
          }
        }
      },
      "metadata": {
        "spec_version": 3,
        "kind": "image"
      }
    }
    EOF

    Replace <image_url> with the image URL downloaded from the previous step.

  8. Create the Nutanix image:

    $ curl -k -u <user>:'<password>' -X 'POST' \
      'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d @./create-image.json | jq '.metadata.uuid'

    Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440.
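
    The remaining Nutanix API calls reuse the same credentials and endpoint. Optionally, you can keep them in shell variables and substitute them into the curl commands; the variable names here are examples only:

    $ export NTX_USER=<user>
    $ export NTX_PASSWORD='<password>'
    $ export NTX_ENDPOINT='https://<domain-or-ip>:<port>'

    You can then substitute these variables into the curl commands in the following steps.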

  9. Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_IMAGE_UUID=<uuid>
    EOF
  10. Get the Nutanix cluster UUID:

    $ curl -k -u <user>:'<password>' -X 'POST' \
      'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "kind": "cluster"
    }'  | jq '.entities[] | select(.spec.name=="<nutanix_cluster_name>") | .metadata.uuid'

    Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <nutanix_cluster_name> with the name of the Nutanix cluster.

  11. Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_CLUSTER_UUID=<uuid>
    EOF

    Replace <uuid> with the returned UUID of the Nutanix cluster.

  12. Get the Nutanix cluster’s subnet UUID:

    $ curl -k -u <user>:'<password>' -X 'POST' \
      'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "kind": "subnet",
      "filter": "name==<subnet_name>"
    }' | jq '.entities[].metadata.uuid'

    Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <subnet_name> with the name of the cluster’s subnet.

  13. Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_SUBNET_UUID=<uuid>
    EOF

    Replace <uuid> with the returned UUID of the cluster subnet.

  14. Ensure the Nutanix environment variables are set:

    $ source ~/nutanix-cluster-env.sh
  15. Create a VM configuration file for each Nutanix host. Create three control plane (master) VMs and at least two worker VMs. For example:

    $ touch create-master-0.json
    $ cat << EOF > create-master-0.json
    {
       "spec": {
          "name": "<host_name>",
          "resources": {
             "power_state": "ON",
             "num_vcpus_per_socket": 1,
             "num_sockets": 16,
             "memory_size_mib": 32768,
             "disk_list": [
                {
                   "disk_size_mib": 122880,
                   "device_properties": {
                      "device_type": "DISK"
                   }
                },
                {
                   "device_properties": {
                      "device_type": "CDROM"
                   },
                   "data_source_reference": {
                     "kind": "image",
                     "uuid": "$NTX_IMAGE_UUID"
                  }
                }
             ],
             "nic_list": [
                {
                   "nic_type": "NORMAL_NIC",
                   "is_connected": true,
                   "ip_endpoint_list": [
                      {
                         "ip_type": "DHCP"
                      }
                   ],
                   "subnet_reference": {
                      "kind": "subnet",
                      "name": "$NTX_SUBNET_NAME",
                      "uuid": "$NTX_SUBNET_UUID"
                   }
                }
             ],
             "guest_tools": {
                "nutanix_guest_tools": {
                   "state": "ENABLED",
                   "iso_mount_state": "MOUNTED"
                }
             }
          },
          "cluster_reference": {
             "kind": "cluster",
             "name": "$NTX_CLUSTER_NAME",
             "uuid": "$NTX_CLUSTER_UUID"
          }
       },
       "api_version": "3.1.0",
       "metadata": {
          "kind": "vm"
       }
    }
    EOF

    Replace <host_name> with the name of the host.
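
    Optionally, create the remaining configuration files by copying the first one and editing it; the host names below are examples only. In each copy, change the name value, and for the worker VMs also adjust the CPU and memory values to the worker minimum requirements:

    $ for name in master-1 master-2 worker-0 worker-1; do
        cp create-master-0.json "create-${name}.json"
      done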

  16. Boot each Nutanix virtual machine:

    $ curl -k -u <user>:'<password>' -X 'POST' \
      'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d @./<vm_config_file_name> | jq '.metadata.uuid'

    Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <vm_config_file_name> with the name of the VM configuration file.

  17. Assign the returned VM UUID to a unique environment variable in the configuration file:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_MASTER_0_UUID=<uuid>
    EOF

    Replace <uuid> with the returned UUID of the VM.

    Note

    The environment variable must have a unique name for each VM.
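
    For example, after booting the remaining VMs, you might append entries such as the following, where each <uuid> is the value returned for that VM and the variable names are examples only:

    $ cat << EOF >> ~/nutanix-cluster-env.sh
    export NTX_MASTER_1_UUID=<uuid>
    export NTX_MASTER_2_UUID=<uuid>
    export NTX_WORKER_0_UUID=<uuid>
    export NTX_WORKER_1_UUID=<uuid>
    EOF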

  18. Wait until the Assisted Installer has discovered each VM and each VM has passed validation:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.enabled_host_count'
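
    Optionally, poll until the expected number of hosts is reported; this minimal sketch assumes three control plane and two worker VMs, for a total of 5:

    $ until [ "$(curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" | jq '.enabled_host_count')" = "5" ]; do
        sleep 30
      done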
  19. Modify the cluster definition to enable integration with Nutanix:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "platform_type":"nutanix"
    }
    ' | jq
  20. Continue with the installation procedure.

13.3. Nutanix post-installation configuration

Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.

Prerequisites

  • The Assisted Installer has finished installing the cluster successfully.
  • The cluster is connected to console.redhat.com.
  • You have access to the Red Hat OpenShift Container Platform command line interface.

13.3.1. Updating the Nutanix configuration settings

After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually:

  • <prismcentral_username>: The Nutanix Prism Central username.
  • <prismcentral_password>: The Nutanix Prism Central password.
  • <prismcentral_address>: The Nutanix Prism Central address.
  • <prismcentral_port>: The Nutanix Prism Central port.
  • <prismelement_username>: The Nutanix Prism Element username.
  • <prismelement_password>: The Nutanix Prism Element password.
  • <prismelement_address>: The Nutanix Prism Element address.
  • <prismelement_port>: The Nutanix Prism Element port.
  • <prismelement_clustername>: The Nutanix Prism Element cluster name.
  • <nutanix_storage_container>: The Nutanix Prism storage container.

Procedure

  1. In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:

    $ oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF
    {
      "spec": {
        "platformSpec": {
          "nutanix": {
            "prismCentral": {
              "address": "<prismcentral_address>",
              "port": <prismcentral_port>
            },
            "prismElements": [
              {
                "endpoint": {
                  "address": "<prismelement_address>",
                  "port": <prismelement_port>
                },
                "name": "<prismelement_clustername>"
              }
            ]
          },
          "type": "Nutanix"
        }
      }
    }
    EOF

    Sample output

    infrastructure.config.openshift.io/cluster patched

    For additional details, see Creating a machine set on Nutanix.

  2. Create the Nutanix secret:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
       name: nutanix-credentials
       namespace: openshift-machine-api
    type: Opaque
    stringData:
      credentials: |
    [{"type":"basic_auth","data":{"prismCentral":{"username":"${<prismcentral_username>}","password":"${<prismcentral_password>}"},"prismElements":null}}]
    EOF

    Sample output

    secret/nutanix-credentials created

  3. When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:

    1. Get the Nutanix cloud provider configuration YAML file:

      $ oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml
    2. Create a working copy of the configuration file:

      $ cp cloud-provider-config-backup.yaml cloud-provider-config.yaml
    3. Open the configuration YAML file:

      $ vi cloud-provider-config.yaml
    4. Edit the configuration YAML file as follows:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: cloud-provider-config
        namespace: openshift-config
      data:
        config: |
          {
          	"prismCentral": {
          		"address": "<prismcentral_address>",
          		"port":<prismcentral_port>,
          		"credentialRef": {
          		   "kind": "Secret",
          		   "name": "nutanix-credentials",
          		   "namespace": "openshift-cloud-controller-manager"
          		}
          	},
          	"topologyDiscovery": {
          		"type": "Prism",
          		"topologyCategories": null
          	},
          	"enableCustomLabeling": true
          }
    5. Apply the configuration updates:

      $ oc apply -f cloud-provider-config.yaml

      Sample output

      Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
      
      configmap/cloud-provider-config configured
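
      Optionally, confirm that the applied ConfigMap contains the Prism Central address:

      $ oc get cm cloud-provider-config -n openshift-config -o jsonpath='{.data.config}'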

13.3.2. Creating the Nutanix CSI Operator group

Create an Operator group for the Nutanix CSI Operator.

Note

For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources.

Procedure

  1. Open the Nutanix CSI Operator Group YAML file:

    $ vi openshift-cluster-csi-drivers-operator-group.yaml
  2. Edit the YAML file as follows:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      generateName: openshift-cluster-csi-drivers
      namespace: openshift-cluster-csi-drivers
    spec:
      targetNamespaces:
      - openshift-cluster-csi-drivers
      upgradeStrategy: Default
  3. Create the Operator Group:

    $ oc create -f openshift-cluster-csi-drivers-operator-group.yaml

    Sample output

    operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created

13.3.3. Installing the Nutanix CSI Operator

The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.

Note

For instructions on performing this step through the Red Hat OpenShift Container Platform, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources.

Procedure

  1. Get the parameter values for the Nutanix CSI Operator YAML file:

    1. Check that the Nutanix CSI Operator exists:

      $ oc get packagemanifests | grep nutanix

      Sample output

      nutanixcsioperator   Certified Operators   129m

    2. Assign the default channel for the Operator to a BASH variable:

      $ DEFAULT_CHANNEL=$(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})
    3. Assign the starting cluster service version (CSV) for the Operator to a BASH variable:

      $ STARTING_CSV=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.channels[*].currentCSV\})
    4. Assign the catalog source for the subscription to a BASH variable:

      $ CATALOG_SOURCE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSource\})
    5. Assign the Nutanix CSI Operator source namespace to a BASH variable:

      $ SOURCE_NAMESPACE=$(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSourceNamespace\})
  2. Create the Nutanix CSI Operator YAML file using the BASH variables:

    $ cat << EOF > nutanixcsioperator.yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: nutanixcsioperator
      namespace: openshift-cluster-csi-drivers
    spec:
      channel: $DEFAULT_CHANNEL
      installPlanApproval: Automatic
      name: nutanixcsioperator
      source: $CATALOG_SOURCE
      sourceNamespace: $SOURCE_NAMESPACE
      startingCSV: $STARTING_CSV
    EOF
  3. Create the CSI Nutanix Operator:

    $ oc apply -f nutanixcsioperator.yaml

    Sample output

    subscription.operators.coreos.com/nutanixcsioperator created

  4. Run the following command until the Operator subscription state changes to AtLatestKnown, which indicates that the Operator subscription has been created. This might take some time:

    $ oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'
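
    For example, a minimal polling sketch:

    $ until [ "$(oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}')" = "AtLatestKnown" ]; do
        sleep 10
      done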

13.3.4. Deploying the Nutanix CSI storage driver

The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.

Note

For instructions on performing this step through the Red Hat OpenShift Container Platform, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources.

Procedure

  1. Create a NutanixCsiStorage resource to deploy the driver:

    $ cat <<EOF | oc create -f -
    apiVersion: crd.nutanix.com/v1alpha1
    kind: NutanixCsiStorage
    metadata:
      name: nutanixcsistorage
      namespace: openshift-cluster-csi-drivers
    spec: {}
    EOF

    Sample output

    nutanixcsistorage.crd.nutanix.com/nutanixcsistorage created

  2. Create a Nutanix secret for the CSI storage driver:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: ntnx-secret
      namespace: openshift-cluster-csi-drivers
    stringData:
      # prism-element-ip:prism-port:admin:password
      key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password>
    EOF

    Note

    Replace these parameters with actual values while keeping the same format.

    Sample output

    secret/ntnx-secret created
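
    Optionally, verify that the secret exists in the openshift-cluster-csi-drivers namespace:

    $ oc get secret ntnx-secret -n openshift-cluster-csi-drivers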

13.3.5. Validating the post-installation configurations

Run the following steps to validate the configuration.

Procedure

  1. Verify that you can create a storage class:

    $ cat <<EOF | oc create -f -
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: nutanix-volume
      annotations:
        storageclass.kubernetes.io/is-default-class: 'true'
    provisioner: csi.nutanix.com
    parameters:
      csi.storage.k8s.io/fstype: ext4
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers
      csi.storage.k8s.io/provisioner-secret-name: ntnx-secret
      storageContainer: <nutanix_storage_container>
      csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret
      csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers
      storageType: NutanixVolumes
      csi.storage.k8s.io/node-publish-secret-name: ntnx-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    EOF
    Note

    Take <nutanix_storage_container> from the Nutanix configuration; for example, SelfServiceContainer.

    Sample output

    storageclass.storage.k8s.io/nutanix-volume created
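
    Optionally, confirm that the new storage class is marked as the default; the NAME column shows (default) next to nutanix-volume when the annotation is set:

    $ oc get storageclass nutanix-volume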

  2. Verify that you can create the Nutanix persistent volume claim (PVC):

    1. Create the persistent volume claim (PVC):

      $ cat <<EOF | oc create -f -
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: nutanix-volume-pvc
        namespace: openshift-cluster-csi-drivers
        annotations:
          volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com
          volume.kubernetes.io/storage-provisioner: csi.nutanix.com
        finalizers:
          - kubernetes.io/pvc-protection
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: nutanix-volume
        volumeMode: Filesystem
      EOF

      Sample output

      persistentvolumeclaim/nutanix-volume-pvc created

    2. Validate that the persistent volume claim (PVC) status is Bound:

      $ oc get pvc -n openshift-cluster-csi-drivers

      Sample output

      NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
      nutanix-volume-pvc   Bound                                        nutanix-volume   52s
