Chapter 2. Installing a cluster on Nutanix


In OpenShift Container Platform version 4.13, you can choose one of the following options to install a cluster on your Nutanix instance:

Using installer-provisioned infrastructure: Use the procedures in the following sections to deploy a cluster on installer-provisioned infrastructure, which is ideal for installing in connected or disconnected network environments. With installer-provisioned infrastructure, the installation program provisions the underlying infrastructure for the cluster.

Using the Assisted Installer: The Assisted Installer is hosted at console.redhat.com and cannot be used in disconnected environments. The Assisted Installer does not provision the underlying infrastructure for the cluster, so you must provision the infrastructure before you run the Assisted Installer. Installing with the Assisted Installer also provides integration with Nutanix, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details.

Using user-provisioned infrastructure: Complete the relevant steps outlined in the Installing a cluster on any platform documentation.

2.1. Prerequisites

  • You have reviewed details about the OpenShift Container Platform installation and update processes.
  • The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. A connectivity check is sketched after this list.
  • If you use a firewall, you have met these prerequisites:

    • You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed.
    • You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry.
  • If your Nutanix environment uses the default self-signed SSL certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide.

    If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI.

    Important

    Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x.
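
For example, you can check that port 9440 is reachable and confirm the key size of the Prism Central certificate before you begin. This is a minimal sketch; pc.example.com is a hypothetical address that you must replace with your Prism Central or Prism Element host:

$ # Hypothetical address; replace with your Prism Central or Prism Element host.
$ curl -sk -o /dev/null -w '%{http_code}\n' https://pc.example.com:9440/
$ # Confirm that the certificate uses a 2048-bit key.
$ openssl s_client -connect pc.example.com:9440 </dev/null 2>/dev/null | openssl x509 -noout -text | grep 'Public-Key'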

2.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.13, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

2.3. Internet access for Prism Central

Prism Central requires internet access to obtain the Red Hat Enterprise Linux CoreOS (RHCOS) image that is required to install the cluster. The RHCOS image for Nutanix is available at rhcos.mirror.openshift.com.
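
If you need the exact image URL, the installation program can print the RHCOS stream metadata. The following sketch assumes that the jq tool is installed and that the stream JSON uses the 4.13 layout:

$ ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.nutanix.formats.qcow2.disk.location'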

2.4. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging are required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
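
    To confirm that the key was added, you can list the identities that the ssh-agent process manages:

    $ ssh-add -l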

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

2.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
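
To confirm that the extracted binary runs on your host, you can print its version; the release string in the output corresponds to the version that you downloaded:

$ ./openshift-install version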

2.6. Adding Nutanix root CA certificates to your system trust

Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.

Procedure

  1. From the Prism Central web console, download the Nutanix root CA certificates.
  2. Extract the compressed file that contains the Nutanix root CA certificates.
  3. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:

    # cp certs/lin/* /etc/pki/ca-trust/source/anchors
  4. Update your system trust. For example, on a Fedora operating system, run the following command:

    # update-ca-trust extract
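
To confirm that the certificates were added, you can query the system trust store. This sketch assumes a system that uses p11-kit, such as Fedora, and that the certificate subjects contain the string "Nutanix":

# trust list | grep -i nutanix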

2.7. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Nutanix.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Verify that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix".

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select nutanix as the platform to target.
      3. Enter the Prism Central domain name or IP address.
      4. Enter the port that is used to log into Prism Central.
      5. Enter the credentials that are used to log into Prism Central.

        The installation program connects to Prism Central.

      6. Select the Prism Element that will manage the OpenShift Container Platform cluster.
      7. Select the network subnet to use.
      8. Enter the virtual IP address that you configured for control plane API access.
      9. Enter the virtual IP address that you configured for cluster ingress.
      10. Enter the base domain. This base domain must be the same one that you configured in the DNS records.
      11. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records.
      12. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation.

    For more information about the parameters, see "Installation configuration parameters".

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

2.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

2.7.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 2.1. Required parameters
Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters and hyphens (-), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

2.7.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Note

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 2.2. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The Red Hat OpenShift Networking network plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24. For IBM Power Virtual Server, the default value is 192.168.0.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
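
Taken together, the defaults in this table correspond to a networking stanza similar to the following sketch; every value shown is the documented default:

networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16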

2.7.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 2.3. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

capabilities

Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing.

String array

capabilities.baselineCapabilitySet

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, v4.12 and vCurrent. The default value is vCurrent.

String

capabilities.additionalEnabledCapabilities

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You may specify multiple capabilities in this parameter.

String array

cpuPartitioningMode

Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section.

None or AllNodes. None is the default value.

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute: hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

featureSet

Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".

String. The name of the feature set to enable, such as TechPreviewNoUpgrade.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane: hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

For example, sshKey: ssh-ed25519 AAAA...

  1. Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.

2.7.1.4. Additional Nutanix configuration parameters

Additional Nutanix configuration parameters are described in the following table:

Table 2.4. Additional Nutanix cluster parameters
Parameter | Description | Values

compute.platform.nutanix.categories.key

The name of a prism category key to apply to compute VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management.

String

compute.platform.nutanix.categories.value

The value of a prism category key-value pair to apply to compute VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central.

String

compute.platform.nutanix.project.type

The type of identifier you use to select a project for compute VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview.

name or uuid

compute.platform.nutanix.project.name or compute.platform.nutanix.project.uuid

The name or UUID of a project with which compute VMs are associated. This parameter must be accompanied by the type parameter.

String

compute.platform.nutanix.bootType

The boot type that the compute machines use. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment.

Legacy, SecureBoot or UEFI. The default is Legacy.

controlPlane.platform.nutanix.categories.key

The name of a prism category key to apply to control plane VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management.

String

controlPlane.platform.nutanix.categories.value

The value of a prism category key-value pair to apply to control plane VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central.

String

controlPlane.platform.nutanix.project.type

The type of identifier you use to select a project for control plane VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview.

name or uuid

controlPlane.platform.nutanix.project.name or controlPlane.platform.nutanix.project.uuid

The name or UUID of a project with which control plane VMs are associated. This parameter must be accompanied by the type parameter.

String

platform.nutanix.defaultMachinePlatform.categories.key

The name of a prism category key to apply to all VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management.

String

platform.nutanix.defaultMachinePlatform.categories.value

The value of a prism category key-value pair to apply to all VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central.

String

platform.nutanix.defaultMachinePlatform.project.type

The type of identifier you use to select a project for all VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview.

name or uuid.

platform.nutanix.defaultMachinePlatform.project.name or platform.nutanix.defaultMachinePlatform.project.uuid

The name or UUID of a project with which all VMs are associated. This parameter must be accompanied by the type parameter.

String

platform.nutanix.defaultMachinePlatform.bootType

The boot type for all machines. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment.

Legacy, SecureBoot or UEFI. The default is Legacy.

platform.nutanix.apiVIP

The virtual IP (VIP) address that you configured for control plane API access.

IP address

platform.nutanix.ingressVIP

The virtual IP (VIP) address that you configured for cluster ingress.

IP address

platform.nutanix.prismCentral.endpoint.address

The Prism Central domain name or IP address.

String

platform.nutanix.prismCentral.endpoint.port

The port that is used to log into Prism Central.

String

platform.nutanix.prismCentral.password

The password for the Prism Central user name.

String

platform.nutanix.prismCentral.username

The user name that is used to log into Prism Central.

String

platform.nutanix.prismElements.endpoint.address

The Prism Element domain name or IP address. [1]

String

platform.nutanix.prismElements.endpoint.port

The port that is used to log into Prism Element.

String

platform.nutanix.prismElements.uuid

The universally unique identifier (UUID) for Prism Element.

String

platform.nutanix.subnetUUIDs

The UUID of the Prism Element network that contains the virtual IP addresses and DNS records that you configured. [2]

String

platform.nutanix.clusterOSImage

Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image.

An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2

  1. The prismElements section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the OpenShift Container Platform cluster. Only a single Prism Element is supported.
  2. Only one subnet per OpenShift Container Platform cluster is supported.

2.7.2. Sample customized install-config.yaml file for Nutanix

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 3
  platform:
    nutanix: 4
      cpus: 2
      coresPerSocket: 2
      memoryMiB: 8192
      osDisk:
        diskSizeGiB: 120
      categories: 5
      - key: <category_key_name>
        value: <category_value>
controlPlane: 6
  hyperthreading: Enabled 7
  name: master
  replicas: 3
  platform:
    nutanix: 8
      cpus: 4
      coresPerSocket: 2
      memoryMiB: 16384
      osDisk:
        diskSizeGiB: 120
      categories: 9
      - key: <category_key_name>
        value: <category_value>
metadata:
  creationTimestamp: null
  name: test-cluster 10
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  machineNetwork:
    - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 11
  serviceNetwork:
    - 172.30.0.0/16
platform:
  nutanix:
    apiVIPs:
      - 10.40.142.7 12
    defaultMachinePlatform:
      bootType: Legacy
      categories: 13
      - key: <category_key_name>
        value: <category_value>
      project: 14
        type: name
        name: <project_name>
    ingressVIPs:
      - 10.40.142.8 15
    prismCentral:
      endpoint:
        address: your.prismcentral.domainname 16
        port: 9440 17
      password: <password> 18
      username: <username> 19
    prismElements:
    - endpoint:
        address: your.prismelement.domainname
        port: 9440
      uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712
    subnetUUIDs:
    - c7938dc6-7659-453e-a688-e26020c68e43
    clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20
credentialsMode: Manual
publish: External
pullSecret: '{"auths": ...}' 21
fips: false 22
sshKey: ssh-ed25519 AAAA... 23
1 10 12 15 16 17 18 19 21
Required. The installation program prompts you for this value.
2 6
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
3 7
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

4 8
Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
5 9 13
Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines.
11
The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
14
Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines.
20
Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image.
22
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
Important

OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes.

23
Optional: You can provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

2.7.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    5
    Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
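
After the cluster is running and you have installed the OpenShift CLI, you can inspect the resulting cluster-wide proxy configuration:

$ oc get proxy cluster -o yaml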

2.8. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the architecture from the Product Variant drop-down list.
  3. Select the appropriate version from the Version drop-down list.
  4. Click Download Now next to the OpenShift v4.13 Linux Client entry and save the file.
  5. Unpack the archive:

    $ tar xvf <file>
  6. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>
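
    For example, to print the client version:

    $ oc version --client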

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.13 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file.

    Note

    For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry.

  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

2.9. Configuring IAM for Nutanix

Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets.

Prerequisites

  • You have configured the ccoctl binary.
  • You have an install-config.yaml file.

Procedure

  1. Create a YAML file that contains the credentials data in the following format:

    Credentials data format

    credentials:
    - type: basic_auth 1
      data:
        prismCentral: 2
          username: <username_for_prism_central>
          password: <password_for_prism_central>
        prismElements: 3
        - name: <name_of_prism_element>
          username: <username_for_prism_element>
          password: <password_for_prism_element>

    1
    Specify the authentication type. Only basic authentication is supported.
    2
    Specify the Prism Central credentials.
    3
    Optional: Specify the Prism Element credentials.
  2. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  3. Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --cloud=nutanix \
      --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1
    1
    Specify the path to the directory that contains the files for the component CredentialsRequests objects. If the specified directory does not exist, this command creates it.

    Sample CredentialsRequest object

      apiVersion: cloudcredential.openshift.io/v1
      kind: CredentialsRequest
      metadata:
        annotations:
          include.release.openshift.io/self-managed-high-availability: "true"
        labels:
          controller-tools.k8s.io: "1.0"
        name: openshift-machine-api-nutanix
        namespace: openshift-cloud-credential-operator
      spec:
        providerSpec:
          apiVersion: cloudcredential.openshift.io/v1
          kind: NutanixProviderSpec
        secretRef:
          name: nutanix-credentials
          namespace: openshift-machine-api

  4. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components.

    Example credrequests directory contents for OpenShift Container Platform 4.13 on Nutanix

    0000_26_cloud-controller-manager-operator_18_credentialsrequest-nutanix.yaml 1
    0000_30_machine-api-operator_00_credentials-request.yaml 2

    1
    The Cloud Controller Manager Operator CR is required.
    2
    The Machine API Operator CR is required.
  5. Use the ccoctl tool to process all of the CredentialsRequest objects in the credrequests directory by running the following command:

    $ ccoctl nutanix create-shared-secrets \
      --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \1
      --output-dir=<ccoctl_output_dir> \2
      --credentials-source-filepath=<path_to_credentials_file> 3
    1
    Specify the path to the directory that contains the files for the component CredentialsRequests objects.
    2
    Specify the directory that contains the files of the component credentials secrets, under the manifests directory. By default, the ccoctl tool creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag.
    3
    Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials. To specify a different directory, use the --credentials-source-filepath flag.
  6. Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual.

    Example install-config.yaml configuration file

    apiVersion: v1
    baseDomain: cluster1.example.com
    credentialsMode: Manual 1
    ...

    1
    Add this line to set the credentialsMode parameter to Manual.
  7. Create the installation manifests by running the following command:

    $ openshift-install create manifests --dir <installation_directory> 1
    1
    Specify the path to the directory that contains the install-config.yaml file for your cluster.
  8. Copy the generated credential files to the target manifests directory by running the following command:

    $ cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests

Verification

  • Ensure that the appropriate secrets exist in the manifests directory.

    $ ls ./<installation_directory>/manifests

    Example output

    cluster-config.yaml
    cluster-dns-02-config.yml
    cluster-infrastructure-02-config.yml
    cluster-ingress-02-config.yml
    cluster-network-01-crd.yml
    cluster-network-02-config.yml
    cluster-proxy-01-config.yaml
    cluster-scheduler-02-config.yml
    cvo-overrides.yaml
    kube-cloud-config.yaml
    kube-system-configmap-root-ca.yaml
    machine-config-server-tls-secret.yaml
    openshift-config-secret-pull-secret.yaml
    openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml
    openshift-machine-api-nutanix-credentials-credentials.yaml

2.10. Adding config map and secret resources required for Nutanix CCM

Installations on Nutanix require additional ConfigMap and Secret resources to integrate with the Nutanix Cloud Controller Manager (CCM).

Prerequisites

  • You have created a manifests directory within your installation directory.

Procedure

  1. Navigate to the manifests directory:

    $ cd <path_to_installation_directory>/manifests
  2. Create the cloud-conf ConfigMap file with the name openshift-cloud-controller-manager-cloud-config.yaml and add the following information:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cloud-conf
      namespace: openshift-cloud-controller-manager
    data:
      cloud.conf: "{
          \"prismCentral\": {
              \"address\": \"<prism_central_FQDN/IP>\", 1
              \"port\": 9440,
                \"credentialRef\": {
                    \"kind\": \"Secret\",
                    \"name\": \"nutanix-credentials\",
                    \"namespace\": \"openshift-cloud-controller-manager\"
                }
           },
           \"topologyDiscovery\": {
               \"type\": \"Prism\",
               \"topologyCategories\": null
           },
           \"enableCustomLabeling\": true
         }"
    1
    Specify the Prism Central FQDN/IP.
  3. Verify that the file cluster-infrastructure-02-config.yml exists and has the following information:

    spec:
      cloudConfig:
        key: config
        name: cloud-provider-config
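
    For example, you can confirm the stanza with a quick search of the file:

    $ grep -A 2 'cloudConfig:' cluster-infrastructure-02-config.yml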

2.11. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure

  • Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
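
If your terminal session is interrupted while the cluster deploys, you can reattach to the installation from the same directory by using the wait-for command:

$ ./openshift-install wait-for install-complete --dir <installation_directory>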

Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
  • Credential information also outputs to <installation_directory>/.openshift_install.log.
Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

2.12. Configuring the default storage container

After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster.

For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage.

2.13. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.

After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

2.14. Additional resources

  • About remote health monitoring

2.15. Next steps

  • Customize your cluster.
  • If necessary, you can opt out of remote health reporting.
