Chapter 1. Preparing to install on Nutanix
Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements.
1.1. Nutanix version requirements
You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements.
Component | Required version |
---|---|
Nutanix AOS | 6.5.2.7 or later |
Prism Central | pc.2022.6 or later |
1.2. Environment requirements
Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements.
1.2.1. Required account privileges
The installation program requires access to a Nutanix account with the permissions that are necessary to deploy the cluster and to maintain its daily operation. The following options are available to you:
- You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions.
- If your organization’s security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory.
Consider the following when managing this user account:
- When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines.
- Ensure that the user is a member of the project to which the virtual machines must be assigned.
For more information, see the Nutanix documentation about creating a Custom Cloud Native role, assigning a role, and adding a user to a project.
Example 1.1. Required permissions for creating a Custom Cloud Native role
Nutanix Object | When required | Required permissions in Nutanix API | Description |
---|---|---|---|
Categories | Always | | Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. |
Images | Always | | Create, read, and delete the operating system images used for the OpenShift Container Platform machines. |
Virtual Machines | Always | | Create, read, and delete the OpenShift Container Platform machines. |
Clusters | Always | | View the Prism Element clusters that host the OpenShift Container Platform machines. |
Subnets | Always | | View the subnets that host the OpenShift Container Platform machines. |
Projects | If you will associate a project with compute machines, control plane machines, or all machines. | | View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. |
1.2.2. Cluster limits
Available resources vary between clusters. The number of clusters that you can create in a Nutanix environment is limited primarily by available storage space, by the resources that each cluster creates, and by the resources that you require to deploy a cluster, such as IP addresses and networks.
1.2.3. Cluster resources
A minimum of 800 GB of storage is required to use a standard cluster.
When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process.
A standard OpenShift Container Platform installation creates the following resources:
- 1 label
- Virtual machines:
  - 1 disk image
  - 1 temporary bootstrap node
  - 3 control plane nodes
  - 3 compute machines
1.2.4. Networking requirements
You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster:
- IP addresses
- DNS records
Nutanix Flow Virtual Networking is supported for new cluster installations. To use this feature, enable Flow Virtual Networking on your AHV cluster before installing. For more information, see Flow Virtual Networking overview.
It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks.
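As an illustration only, if you run an ISC DHCP server, you can advertise an NTP server to the cluster machines with the ntp-servers option. The subnet, range, and addresses below are placeholders, not values from this document:

```text
# Example dhcpd.conf fragment (placeholder addresses)
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.20 192.0.2.100;
  option ntp-servers 192.0.2.2;
}
```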
1.2.4.1. Required IP Addresses
An installer-provisioned installation requires two static virtual IP (VIP) addresses:
- A VIP address for the API is required. This address is used to access the cluster API.
- A VIP address for ingress is required. This address is used for cluster ingress traffic.
You specify these IP addresses when you install the OpenShift Container Platform cluster.
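For example, in recent OpenShift Container Platform versions these VIPs are specified in the platform.nutanix stanza of install-config.yaml. This is a sketch with placeholder addresses, not a complete configuration file:

```yaml
platform:
  nutanix:
    apiVIPs:
      - 192.0.2.10   # VIP used to access the cluster API
    ingressVIPs:
      - 192.0.2.11   # VIP used for cluster ingress traffic
```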
1.2.4.2. DNS records
You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster.
If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes.
A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
API VIP | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Ingress VIP | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
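The two records can be sketched in a BIND zone file as follows. The cluster name mycluster, the base domain example.com, and the IP addresses are placeholders:

```text
; A records for an OpenShift Container Platform cluster (placeholder values)
api.mycluster.example.com.     IN A 192.0.2.10
*.apps.mycluster.example.com.  IN A 192.0.2.11
```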
1.3. Configuring the Cloud Credential Operator utility
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process.
To create and manage cloud credentials from outside of the cluster when the CCO is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
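For reference, manual mode is selected by setting the top-level credentialsMode field in install-config.yaml. The sketch below shows only that field alongside placeholder values, not a complete configuration file:

```yaml
apiVersion: v1
baseDomain: example.com      # placeholder base domain
credentialsMode: Manual      # required for Nutanix installations
metadata:
  name: mycluster            # placeholder cluster name
```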
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \ [1]
  -a ~/.pull-secret
[1] For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
- rhel8: Specify this value for hosts that use RHEL 8.
- rhel9: Specify this value for hosts that use RHEL 9.
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl.<rhel_version>
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
Additional resources