Chapter 1. Installing on vSphere


1.1. Installing a cluster on vSphere

In OpenShift Container Platform version 4.6, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure.

1.1.1. Prerequisites

  • Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
  • Review details about the OpenShift Container Platform installation and update processes.
  • The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible; a quick connectivity check is shown after this list.
  • If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

    Note

    Be sure to also review this site list if you are configuring a proxy.
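For example, you can spot-check connectivity from the machine that runs the installation program by requesting the vCenter SDK endpoint. This is an illustrative check only; the host name is a placeholder for your own vCenter or ESXi host. Any HTTP status code in the response shows that port 443 is reachable, while a timeout or connection refusal suggests a firewall or routing problem:

$ curl -k -s -o /dev/null -w '%{http_code}\n' https://vcenter.example.com:443/sdk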

1.1.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.1.3. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.1. Minimum supported vSphere version for VMware components
Component: Hypervisor
Minimum supported version: vSphere 6.5 and later with HW version 13
Description: This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Component: Storage with in-tree drivers
Minimum supported version: vSphere 6.5 and later
Description: This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Component: Networking (NSX-T) (optional)
Minimum supported version: vSphere 6.5U3 or vSphere 6.7U2 and later
Description: vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.
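As one way to spot-check a host's time and NTP configuration, you can use VMware's govc CLI. This is a sketch that assumes govc is installed and the GOVC_URL connection environment variables are set; the host name is a placeholder:

$ govc host.date.info -host esxi01.example.com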

1.1.4. Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate.

Review the following details about the required network ports.

Table 1.2. Ports used for all-machine to all-machine communications
Protocol   Port          Description
ICMP       N/A           Network reachability tests
TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn
UDP        4789          Virtual extensible LAN (VXLAN)
           6081          Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
           500           IPsec IKE packets
           4500          IPsec NAT-T packets
TCP/UDP    30000-32767   Kubernetes node port
ESP        N/A           IPsec Encapsulating Security Payload (ESP)

Table 1.3. Ports used for all-machine to control plane communications
Protocol   Port   Description
TCP        6443   Kubernetes API

Table 1.4. Ports used for control plane machine to control plane machine communications
Protocol   Port        Description
TCP        2379-2380   etcd server and peer ports
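As a hedged example of verifying a few of these ports once machines are running, you can probe them from another node. The host names are placeholders, and a successful connection (or any HTTP response) shows only that the port is reachable, not that the service behind it is healthy:

$ curl -k https://api.mycluster.example.com:6443/version
$ timeout 3 bash -c '</dev/tcp/control-plane-0.mycluster.example.com/10250' && echo open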

1.1.5. vCenter requirements

Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.

Required vCenter account privileges

To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.

If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.

An additional role is required if the installation program is to create a vSphere virtual machine folder.

Example 1.1. Roles and privileges required for installation

vSphere object for role | When required | Required privileges

vSphere vCenter

Always

Cns.Searchable
InventoryService.Tagging.AttachTag
InventoryService.Tagging.CreateCategory
InventoryService.Tagging.CreateTag
InventoryService.Tagging.DeleteCategory
InventoryService.Tagging.DeleteTag
InventoryService.Tagging.EditCategory
InventoryService.Tagging.EditTag
Sessions.ValidateSession
StorageProfile.View

vSphere vCenter Cluster

Always

Host.Config.Storage
Resource.AssignVMToPool
VApp.AssignResourcePool
VApp.Import
VirtualMachine.Config.AddNewDisk

vSphere Datastore

Always

Datastore.AllocateSpace
Datastore.Browse
Datastore.FileManagement

vSphere Port Group

Always

Network.Assign

Virtual Machine Folder

Always

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone
Folder.Create
Folder.Delete

Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether you install the cluster into an existing folder.

Example 1.2. Required permissions and propagation settings

vSphere object                          Folder type                               Propagate to children   Permissions required
vSphere vCenter                         Always                                    False                   Listed required privileges
vSphere vCenter Datacenter              Existing folder                           False                   ReadOnly permission
vSphere vCenter Datacenter              Installation program creates the folder   True                    Listed required privileges
vSphere vCenter Cluster                 Always                                    True                    Listed required privileges
vSphere vCenter Datastore               Always                                    False                   Listed required privileges
vSphere Switch                          Always                                    False                   ReadOnly permission
vSphere Port Group                      Always                                    False                   Listed required privileges
vSphere vCenter Virtual Machine Folder  Existing folder                           True                    Listed required privileges

For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.

Using OpenShift Container Platform with vMotion
Important

OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.

If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.

Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.

Cluster resources

When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.

A standard OpenShift Container Platform installation creates the following vCenter resources:

  • 1 Folder
  • 1 Tag category
  • 1 Tag
  • Virtual machines:

    • 1 template
    • 1 temporary bootstrap node
    • 3 control plane nodes
    • 3 compute machines

Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process, so a minimum of 800 GB of storage is required for a standard cluster.

If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage.

Cluster limits

Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both the limitations on the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.

Networking requirements

You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines.

Note

Persistent IP addresses are not available before the installation begins. Allocate a DHCP range and, after installation, manually replace the allocation with the persistent IP addresses.
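For example, with a dnsmasq-based DHCP server you can pin each cluster machine to a persistent address by its MAC address. This is a minimal sketch, and the file name, MAC addresses, and IP addresses are all placeholders for your own environment:

# /etc/dnsmasq.d/openshift.conf
# Pool used for the initial allocation
dhcp-range=192.168.100.50,192.168.100.99,12h
# Persistent reservations, keyed by MAC address
dhcp-host=00:50:56:aa:bb:01,control-plane-0,192.168.100.10
dhcp-host=00:50:56:aa:bb:02,control-plane-1,192.168.100.11
# Advertise an NTP server to the nodes (see the note that follows)
dhcp-option=option:ntp-server,192.168.100.1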

Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster:

Note

It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which an NTP server prevents.

Required IP Addresses

An installer-provisioned vSphere installation requires these static IP addresses:

  • The API address is used to access the cluster API.
  • The Ingress address is used for cluster ingress traffic.
  • The control plane node addresses are used when upgrading a cluster from version 4.5 to 4.6.

You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.

DNS records

You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form <component>.<cluster_name>.<base_domain>. (including the trailing period).

Table 1.5. Required DNS records
Component | Record | Description

API VIP

api.<cluster_name>.<base_domain>.

This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Ingress VIP

*.apps.<cluster_name>.<base_domain>.

A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
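For example, in a BIND zone file for example.com, the two records might look like the following. The cluster name, domain, and addresses are placeholders; the addresses must match the API and Ingress virtual IP addresses that you provide to the installation program:

; fragment of the example.com zone file
api.mycluster.example.com.    IN  A  192.168.100.5
*.apps.mycluster.example.com. IN  A  192.168.100.6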

1.1.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.1.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
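Optionally, verify that the extracted binary runs on your system by printing its version:

$ ./openshift-install version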

1.1.8. Adding vCenter root CA certificates to your system trust

Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.

Procedure

  1. From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads.
  2. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:

    certs
    ├── lin
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    ├── mac
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    └── win
        ├── 108f4d17.0.crt
        ├── 108f4d17.r1.crl
        ├── 7e757f6a.0.crt
        ├── 8e4f8471.0.crt
        └── 8e4f8471.r0.crl
    
    3 directories, 15 files
  3. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:

    # cp certs/lin/* /etc/pki/ca-trust/source/anchors
  4. Update your system trust. For example, on a Fedora operating system, run the following command:

    # update-ca-trust extract
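The commands above apply to Fedora and other RHEL-like systems. On macOS, a similar result can be achieved with the security utility; this is a hedged sketch that assumes the files under certs/mac are PEM-encoded certificates:

$ sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain certs/mac/<certificate>.0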

1.1.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the directory name to store the files that the installation program creates.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Important

    Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    Provide values at the prompts:

    1. Optional: Select an SSH key to use to access your cluster machines.

      Note

      For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    2. Select vsphere as the platform to target.
    3. Specify the name of your vCenter instance.
    4. Specify the user name and password for the vCenter account that has the required permissions to create the cluster.

      The installation program connects to your vCenter instance.

    5. Select the datacenter in your vCenter instance to connect to.
    6. Select the default vCenter datastore to use.

      Note

      Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit.

    7. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
    8. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
    9. Enter the virtual IP address that you configured for control plane API access.
    10. Enter the virtual IP address that you configured for cluster ingress.
    11. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
    12. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured.

      Note

      Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit.

    13. Paste the pull secret from the Red Hat OpenShift Cluster Manager.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s


Note

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.


Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.


Important

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
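If your terminal session is interrupted while the cluster deploys, you do not need to run the create cluster command again. For example, you can reattach to an in-progress installation, or collect logs from the bootstrap machine if the deployment stalls, by using the following subcommands of the installation program:

$ ./openshift-install wait-for install-complete --dir <installation_directory>
$ ./openshift-install gather bootstrap --dir <installation_directory>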

1.1.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.1.10.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.1.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.1.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.1.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin
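As a further verification, list the cluster nodes; each control plane and compute node should eventually report a Ready status:

$ oc get nodes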

1.1.12. Creating registry storage

After you install the cluster, you must create storage for the registry Operator.

1.1.12.1. Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
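For example, you can make that change with a single patch command instead of an interactive editor:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'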

Note

The Prometheus console provides an ImageRegistryRemoved alert, for example:

"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."

1.1.12.2. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available only for non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.1.12.2.1. Configuring registry storage for VMware vSphere

As a cluster administrator, following installation you must configure your registry to use storage.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on VMware vSphere.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • Must have 100Gi capacity.
Important

Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim: 1

    1
    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
1.1.12.2.2. Configuring block registry storage for VMware vSphere

As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types, such as vSphere Virtual Machine Disk (VMDK), during upgrades.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.1.13. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume.
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.
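A hypothetical sketch of the clone step using VMware's govc CLI follows. The datastore name and VMDK paths are placeholders, and because virtual disk handling varies between vSphere versions, treat this as illustrative only and verify the result before deleting anything:

$ govc datastore.cp -ds datastore1 \
    kubevols/pvc-0001.vmdk kubevols-backup/pvc-0001.vmdk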

1.1.14. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

1.2. Installing a cluster on vSphere with customizations

In OpenShift Container Platform version 4.6, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.2.1. Prerequisites

  • Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
  • Review details about the OpenShift Container Platform installation and update processes.
  • The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible.
  • If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

    Note

    Be sure to also review this site list if you are configuring a proxy.

1.2.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.2.3. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.6. Minimum supported vSphere version for VMware components
Component: Hypervisor
Minimum supported version: vSphere 6.5 and later with HW version 13
Description: This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Component: Storage with in-tree drivers
Minimum supported version: vSphere 6.5 and later
Description: This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Component: Networking (NSX-T) (optional)
Minimum supported version: vSphere 6.5U3 or vSphere 6.7U2 and later
Description: vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

1.2.4. Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate.

Review the following details about the required network ports.

Table 1.7. Ports used for all-machine to all-machine communications
Protocol   Port          Description
ICMP       N/A           Network reachability tests
TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn
UDP        4789          Virtual extensible LAN (VXLAN)
           6081          Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
           500           IPsec IKE packets
           4500          IPsec NAT-T packets
TCP/UDP    30000-32767   Kubernetes node port
ESP        N/A           IPsec Encapsulating Security Payload (ESP)

Table 1.8. Ports used for all-machine to control plane communications
Protocol   Port   Description
TCP        6443   Kubernetes API

Table 1.9. Ports used for control plane machine to control plane machine communications
Protocol   Port        Description
TCP        2379-2380   etcd server and peer ports

1.2.5. vCenter requirements

Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.

Required vCenter account privileges

To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.

If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.

An additional role is required if the installation program is to create a vSphere virtual machine folder.

Example 1.3. Roles and privileges required for installation

vSphere object for role | When required | Required privileges

vSphere vCenter

Always

Cns.Searchable
InventoryService.Tagging.AttachTag
InventoryService.Tagging.CreateCategory
InventoryService.Tagging.CreateTag
InventoryService.Tagging.DeleteCategory
InventoryService.Tagging.DeleteTag
InventoryService.Tagging.EditCategory
InventoryService.Tagging.EditTag
Sessions.ValidateSession
StorageProfile.View

vSphere vCenter Cluster

Always

Host.Config.Storage
Resource.AssignVMToPool
VApp.AssignResourcePool
VApp.Import
VirtualMachine.Config.AddNewDisk

vSphere Datastore

Always

Datastore.AllocateSpace
Datastore.Browse
Datastore.FileManagement

vSphere Port Group

Always

Network.Assign

Virtual Machine Folder

Always

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone
Folder.Create
Folder.Delete

Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether you install the cluster into an existing folder.

Example 1.4. Required permissions and propagation settings

vSphere object                          Folder type                               Propagate to children   Permissions required
vSphere vCenter                         Always                                    False                   Listed required privileges
vSphere vCenter Datacenter              Existing folder                           False                   ReadOnly permission
vSphere vCenter Datacenter              Installation program creates the folder   True                    Listed required privileges
vSphere vCenter Cluster                 Always                                    True                    Listed required privileges
vSphere vCenter Datastore               Always                                    False                   Listed required privileges
vSphere Switch                          Always                                    False                   ReadOnly permission
vSphere Port Group                      Always                                    False                   Listed required privileges
vSphere vCenter Virtual Machine Folder  Existing folder                           True                    Listed required privileges

For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.

Using OpenShift Container Platform with vMotion
Important

OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.

If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.

Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.

Cluster resources

When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.

A standard OpenShift Container Platform installation creates the following vCenter resources:

  • 1 Folder
  • 1 Tag category
  • 1 Tag
  • Virtual machines:

    • 1 template
    • 1 temporary bootstrap node
    • 3 control plane nodes
    • 3 compute machines

Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process, so a minimum of 800 GB of storage is required for a standard cluster.

If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage.

Cluster limits

Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both the limitations on the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.

Networking requirements

You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines.

Note

Persistent IP addresses are not available before the installation begins. Allocate a DHCP range and, after installation, manually replace the allocation with the persistent IP addresses.

Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster:

Note

It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which an NTP server prevents.

Required IP Addresses

An installer-provisioned vSphere installation requires these static IP addresses:

  • The API address is used to access the cluster API.
  • The Ingress address is used for cluster ingress traffic.
  • The control plane node addresses are used when upgrading a cluster from version 4.5 to 4.6.

You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.

DNS records

You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form <component>.<cluster_name>.<base_domain>. (including the trailing period).

Table 1.10. Required DNS records
Component | Record | Description

API VIP

api.<cluster_name>.<base_domain>.

This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Ingress VIP

*.apps.<cluster_name>.<base_domain>.

A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

1.2.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.2.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.2.8. Adding vCenter root CA certificates to your system trust

Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.

Procedure

  1. From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads.
  2. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:

    certs
    ├── lin
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    ├── mac
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    └── win
        ├── 108f4d17.0.crt
        ├── 108f4d17.r1.crl
        ├── 7e757f6a.0.crt
        ├── 8e4f8471.0.crt
        └── 8e4f8471.r0.crl
    
    3 directories, 15 files
  3. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:

    # cp certs/lin/* /etc/pki/ca-trust/source/anchors
  4. Update your system trust. For example, on a Fedora operating system, run the following command:

    # update-ca-trust extract

1.2.9. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on VMware vSphere.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select vsphere as the platform to target.
      3. Specify the name of your vCenter instance.
      4. Specify the user name and password for the vCenter account that has the required permissions to create the cluster.

        The installation program connects to your vCenter instance.

      5. Select the datacenter in your vCenter instance to connect to.
      6. Select the default vCenter datastore to use.
      7. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
      8. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
      9. Enter the virtual IP address that you configured for control plane API access.
      10. Enter the virtual IP address that you configured for cluster ingress.
      11. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
      12. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured.
      13. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
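
    For example, a plain copy preserves the file before the installation program consumes it (the backup file name is an assumption):

    $ cp <installation_directory>/install-config.yaml install-config.yaml.bak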

1.2.9.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.2.9.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.11. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters and hyphens (-), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
1.2.9.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.12. Network parameters
ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
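
The parameters in the preceding table can be combined into one networking stanza. The following sketch uses only the default values documented above; adjust the CIDRs to match your environment:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16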

1.2.9.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.13. Optional parameters
ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
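
As an illustration of the imageContentSources structure described in the preceding table, a mirrored release repository might be declared as follows. The registry host name is a placeholder:

imageContentSources:
- mirrors:
  - <mirror_registry>:5000/ocp/release
  source: quay.io/openshift-release-dev/ocp-release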
1.2.9.1.4. Additional VMware vSphere configuration parameters

Additional VMware vSphere configuration parameters are described in the following table:

Table 1.14. Additional VMware vSphere cluster parameters
ParameterDescriptionValues

platform.vsphere.vCenter

The fully-qualified hostname or IP address of the vCenter server.

String

platform.vsphere.username

The user name to use to connect to the vCenter instance. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.

String

platform.vsphere.password

The password for the vCenter user name.

String

platform.vsphere.datacenter

The name of the datacenter to use in the vCenter instance.

String

platform.vsphere.defaultDatastore

The name of the default datastore to use for provisioning volumes.

String

platform.vsphere.folder

Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder.

String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>.

platform.vsphere.network

The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.

String

platform.vsphere.cluster

The vCenter cluster to install the OpenShift Container Platform cluster in.

String

platform.vsphere.apiVIP

The virtual IP (VIP) address that you configured for control plane API access.

An IP address, for example 128.0.0.1.

platform.vsphere.ingressVIP

The virtual IP (VIP) address that you configured for cluster ingress.

An IP address, for example 128.0.0.1.

1.2.9.1.5. Optional VMware vSphere machine pool configuration parameters

Optional VMware vSphere machine pool configuration parameters are described in the following table:

Table 1.15. Optional VMware vSphere machine pool parameters
ParameterDescriptionValues

platform.vsphere.clusterOSImage

The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova.

platform.vsphere.osDisk.diskSizeGB

The size of the disk in gigabytes.

Integer

platform.vsphere.cpus

The total number of virtual processor cores to assign to a virtual machine.

Integer

platform.vsphere.coresPerSocket

The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus/platform.vsphere.coresPerSocket. The default value is 1.

Integer

platform.vsphere.memoryMB

The size of a virtual machine’s memory in megabytes.

Integer
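
For example, in a restricted network installation, platform.vsphere.clusterOSImage can point at a locally mirrored RHCOS image. The host name, path, and the ?sha256= digest form shown here are assumptions; substitute the values for your mirror:

platform:
  vsphere:
    clusterOSImage: http://mirror.example.com/images/rhcos-vmware.x86_64.ova?sha256=<digest>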

1.2.9.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 3
  platform:
    vsphere: 4
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3
  platform:
    vsphere: 7
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster 8
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    network: VM_Network
    cluster: vsphere_cluster_name 9
    apiVIP: api_vip
    ingressVIP: ingress_vip
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
1
The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
3 6
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4 7
Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
8
The cluster name that you specified in your DNS records.
9
The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.

1.2.9.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster, which uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
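
After the cluster is running, you can confirm the settings that were applied by inspecting the cluster Proxy object, for example:

$ oc get proxy/cluster -o yaml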

1.2.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
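
While the create cluster command runs, you can follow the installation log noted above from a second terminal, for example:

$ tail -f <installation_directory>/.openshift_install.log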

1.2.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.2.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
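
    For example, assuming that /usr/local/bin is on your PATH, you can move the binary there:

    $ sudo mv oc /usr/local/bin/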

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.2.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.2.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.2.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin
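
    As a further check, you can list the cluster nodes with the same exported configuration:

    $ oc get nodes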

1.2.13. Creating registry storage

After you install the cluster, you must create storage for the registry Operator.

1.2.13.1. Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
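
A minimal sketch of that change using a merge patch follows; you can equally make the same edit interactively with oc edit:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'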

Note

The Prometheus console provides an ImageRegistryRemoved alert, for example:

"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."

1.2.13.2. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available only for non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.2.13.2.1. Configuring registry storage for VMware vSphere

As a cluster administrator, you must configure your registry to use storage after installation.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on VMware vSphere.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • The persistent storage must have a capacity of 100Gi.
Important

Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim: 1

    1
    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
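
For example, to set the storage configuration with a single merge patch instead of editing interactively, leave the claim empty so that the image-registry-storage PVC is created automatically, as noted above:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'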
1.2.13.2.2. Configuring block registry storage for VMware vSphere

As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types, such as vSphere Virtual Machine Disk (VMDK), during upgrades.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.2.14. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume (a command-line sketch follows this list).
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.
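
If you manage vSphere from the command line, the clone in step 2 might be performed with the govc CLI, as in the following sketch. The govc tool, the datastore name, and the VMDK paths are assumptions for illustration; substitute the values for your environment:

$ govc datastore.cp -ds datastore1 kubevols/pv-disk.vmdk kubevols/pv-disk-backup.vmdk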

1.2.15. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.2.16. Next steps

1.3. Installing a cluster on vSphere with network customizations

In OpenShift Container Platform version 4.6, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

1.3.1. Prerequisites

  • Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
  • Review details about the OpenShift Container Platform installation and update processes.
  • The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
  • If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

    Note

    Be sure to also review this site list if you are configuring a proxy.

1.3.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.3.3. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.16. Minimum supported vSphere version for VMware components
ComponentMinimum supported versionsDescription

Hypervisor

vSphere 6.5 and later with HW version 13

This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Storage with in-tree drivers

vSphere 6.5 and later

This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Optional: Networking (NSX-T)

vSphere 6.5U3 or vSphere 6.7U2 and later

vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware’s NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

1.3.4. Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate.

Review the following details about the required network ports.

Table 1.17. Ports used for all-machine to all-machine communications
ProtocolPortDescription

ICMP

N/A

Network reachability tests

TCP

1936

Metrics

9000-9999

Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.

10250-10259

The default ports that Kubernetes reserves

10256

openshift-sdn

UDP

4789

virtual extensible LAN (VXLAN)

6081

Geneve

9000-9999

Host level services, including the node exporter on ports 9100-9101.

500

IPsec IKE packets

4500

IPsec NAT-T packets

TCP/UDP

30000-32767

Kubernetes node port

ESP

N/A

IPsec Encapsulating Security Payload (ESP)

Table 1.18. Ports used for all-machine to control plane communications
ProtocolPortDescription

TCP

6443

Kubernetes API

Table 1.19. Ports used for control plane machine to control plane machine communications
ProtocolPortDescription

TCP

2379-2380

etcd server and peer ports

1.3.5. vCenter requirements

Before you install an OpenShift Container Platform cluster that uses installer-provisioned infrastructure on your vCenter instance, you must prepare your environment.

Required vCenter account privileges

To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.

If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.

An additional role is required if the installation program is to create a vSphere virtual machine folder.

Example 1.5. Roles and privileges required for installation

vSphere object for roleWhen requiredRequired privileges

vSphere vCenter

Always

Cns.Searchable
InventoryService.Tagging.AttachTag
InventoryService.Tagging.CreateCategory
InventoryService.Tagging.CreateTag
InventoryService.Tagging.DeleteCategory
InventoryService.Tagging.DeleteTag
InventoryService.Tagging.EditCategory
InventoryService.Tagging.EditTag
Sessions.ValidateSession
StorageProfile.View

vSphere vCenter Cluster

Always

Host.Config.Storage
Resource.AssignVMToPool
VApp.AssignResourcePool
VApp.Import
VirtualMachine.Config.AddNewDisk

vSphere Datastore

Always

Datastore.AllocateSpace
Datastore.Browse
Datastore.FileManagement

vSphere Port Group

Always

Network.Assign

Virtual Machine Folder

Always

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone
Folder.Create
Folder.Delete

Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder.

Example 1.6. Required permissions and propagation settings

vSphere objectFolder typePropagate to childrenPermissions required

vSphere vCenter

Always

False

Listed required privileges

vSphere vCenter Datacenter

Existing folder

False

ReadOnly permission

Installation program creates the folder

True

Listed required privileges

vSphere vCenter Cluster

Always

True

Listed required privileges

vSphere vCenter Datastore

Always

False

Listed required privileges

vSphere Switch

Always

False

ReadOnly permission

vSphere Port Group

Always

False

Listed required privileges

vSphere vCenter Virtual Machine Folder

Existing folder

True

Listed required privileges

For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.

Using OpenShift Container Platform with vMotion
Important

OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.

If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.

Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.

Cluster resources

When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.

A standard OpenShift Container Platform installation creates the following vCenter resources:

  • 1 Folder
  • 1 Tag category
  • 1 Tag
  • Virtual machines:

    • 1 template
    • 1 temporary bootstrap node
    • 3 control plane nodes
    • 3 compute machines

These resources use 856 GB of storage, but the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to deploy a standard cluster.

If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage.

Cluster limits

Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.

Networking requirements

You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines.

Note

Persistent IP addresses are not available before the installation begins. Allocate a DHCP range and, after installation, manually replace the allocation with the persistent IP addresses.

Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster:

Note

It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server, but asynchronous server clocks cause errors that an NTP server prevents.

Required IP Addresses

An installer-provisioned vSphere installation requires these static IP addresses:

  • The API address is used to access the cluster API.
  • The Ingress address is used for cluster ingress traffic.
  • The control plane node addresses are used when upgrading a cluster from version 4.5 to 4.6.

You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.

DNS records

You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..

Table 1.20. Required DNS records
ComponentRecordDescription

API VIP

api.<cluster_name>.<base_domain>.

This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Ingress VIP

*.apps.<cluster_name>.<base_domain>.

A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
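
Before you begin the installation, you can verify that both records resolve, for example with dig. The cluster name and base domain are placeholders:

$ dig +noall +answer api.<cluster_name>.<base_domain>
$ dig +noall +answer test.apps.<cluster_name>.<base_domain>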

1.3.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.3.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.3.8. Adding vCenter root CA certificates to your system trust

Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.

Procedure

  1. From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads.
  2. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:

    certs
    ├── lin
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    ├── mac
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    └── win
        ├── 108f4d17.0.crt
        ├── 108f4d17.r1.crl
        ├── 7e757f6a.0.crt
        ├── 8e4f8471.0.crt
        └── 8e4f8471.r0.crl
    
    3 directories, 15 files
  3. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:

    # cp certs/lin/* /etc/pki/ca-trust/source/anchors
  4. Update your system trust. For example, on a Fedora operating system, run the following command:

    # update-ca-trust extract

1.3.9. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on VMware vSphere.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select vsphere as the platform to target.
      3. Specify the name of your vCenter instance.
      4. Specify the user name and password for the vCenter account that has the required permissions to create the cluster.

        The installation program connects to your vCenter instance.

      5. Select the datacenter in your vCenter instance to connect to.
      6. Select the default vCenter datastore to use.
      7. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
      8. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
      9. Enter the virtual IP address that you configured for control plane API access.
      10. Enter the virtual IP address that you configured for cluster ingress.
      11. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
      12. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured.
      13. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.3.9.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.3.9.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.21. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters and hyphens (-), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
1.3.9.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.22. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.3.9.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.23. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
1.3.9.1.4. Additional VMware vSphere configuration parameters

Additional VMware vSphere configuration parameters are described in the following table:

Table 1.24. Additional VMware vSphere cluster parameters
Parameter | Description | Values

platform.vsphere.vCenter

The fully-qualified hostname or IP address of the vCenter server.

String

platform.vsphere.username

The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.

String

platform.vsphere.password

The password for the vCenter user name.

String

platform.vsphere.datacenter

The name of the datacenter to use in the vCenter instance.

String

platform.vsphere.defaultDatastore

The name of the default datastore to use for provisioning volumes.

String

platform.vsphere.folder

Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder.

String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>.

platform.vsphere.network

The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.

String

platform.vsphere.cluster

The vCenter cluster to install the OpenShift Container Platform cluster in.

String

platform.vsphere.apiVIP

The virtual IP (VIP) address that you configured for control plane API access.

An IP address, for example 128.0.0.1.

platform.vsphere.ingressVIP

The virtual IP (VIP) address that you configured for cluster ingress.

An IP address, for example 128.0.0.1.

1.3.9.1.5. Optional VMware vSphere machine pool configuration parameters

Optional VMware vSphere machine pool configuration parameters are described in the following table:

Table 1.25. Optional VMware vSphere machine pool parameters
Parameter | Description | Values

platform.vsphere.clusterOSImage

The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova.

platform.vsphere.osDisk.diskSizeGB

The size of the disk in gigabytes.

Integer

platform.vsphere.cpus

The total number of virtual processor cores to assign a virtual machine.

Integer

platform.vsphere.coresPerSocket

The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus/platform.vsphere.coresPerSocket. The default value is 1.

Integer

platform.vsphere.memoryMB

The size of a virtual machine’s memory in megabytes.

Integer

1.3.9.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 3
  platform:
    vsphere: 4
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3
  platform:
    vsphere: 7
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    network: VM_Network
    cluster: vsphere_cluster_name 9
    apiVIP: api_vip
    ingressVIP: ingress_vip
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
1
The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
3 6
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4 7
Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
8
The cluster name that you specified in your DNS records.
9
The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.

1.3.9.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
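
After the installation completes, you can inspect the resulting Proxy object as a quick sanity check. This is an illustrative command, not a required step:

$ oc get proxy/cluster -o yaml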

1.3.10. Network configuration phases

When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration:

Phase 1

After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields:

  • networking.networkType
  • networking.clusterNetwork
  • networking.serviceNetwork
  • networking.machineNetwork

    For more information on these fields, refer to "Installation configuration parameters".

    Note

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Phase 2
After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify.

You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.

1.3.11. Specifying advanced network configuration

You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster.

Important

Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.

Prerequisites

  • Create the install-config.yaml file and complete any modifications to it.

Procedure

  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory>

    where:

    <installation_directory>
    Specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
    EOF

    where:

    <installation_directory>
    Specifies the directory name that contains the manifests/ directory for your cluster.
  3. Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following example:

    Specify a different VXLAN port for the OpenShift SDN network provider

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        openshiftSDNConfig:
          vxlanPort: 4800

  4. Save the cluster-network-03-config.yml file and quit the text editor.
  5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
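
    Because the installation program deletes the manifests/ directory, a simple copy suffices for the backup, for example (the destination path is illustrative):

    $ cp <installation_directory>/manifests/cluster-network-03-config.yml ~/cluster-network-03-config.yml.bak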

1.3.12. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:

clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.

You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.

1.3.12.1. Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 1.26. Cluster Network Operator configuration object
Field | Type | Description

metadata.name

string

The name of the CNO object. This name is always cluster.

spec.clusterNetwork

array

A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

This value is read-only and specified in the install-config.yaml file.

spec.serviceNetwork

array

A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

This value is read-only and specified in the install-config.yaml file.

spec.defaultNetwork

object

Configures the Container Network Interface (CNI) cluster network provider for the cluster network.

spec.kubeProxyConfig

object

The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect.

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 1.27. defaultNetwork object
Field | Type | Description

type

string

Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation.

Note

OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default.

openshiftSDNConfig

object

This object is only valid for the OpenShift SDN cluster network provider.

ovnKubernetesConfig

object

This object is only valid for the OVN-Kubernetes cluster network provider.

Configuration for the OpenShift SDN CNI cluster network provider

The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.

Table 1.28. openshiftSDNConfig object
Field | Type | Description

mode

string

Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.

The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.

mtu

integer

The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

This value cannot be changed after cluster installation.

vxlanPort

integer

The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation.

If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.

Example OpenShift SDN configuration

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789

Configuration for the OVN-Kubernetes CNI cluster network provider

The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.

Table 1.29. ovnKubernetesConfig object
Field | Type | Description

mtu

integer

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

This value cannot be changed after cluster installation.

genevePort

integer

The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.

Example OVN-Kubernetes configuration

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081

kubeProxyConfig object configuration

The values for the kubeProxyConfig object are defined in the following table:

Table 1.30. kubeProxyConfig object
Field | Type | Description

iptablesSyncPeriod

string

The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

Note

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

proxyArguments.iptables-min-sync-period

array

The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is:

kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
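
For reference, a complete CNO configuration that combines the objects described in this section might look like the following sketch. The values shown reflect the defaults discussed above and are illustrative only:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s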

1.3.13. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.3.14. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.14.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
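
    For example, assuming that /usr/local/bin is included in your PATH:

    $ sudo mv oc /usr/local/bin/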

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.3.14.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.3.14.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.3.15. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

1.3.16. Creating registry storage

After you install the cluster, you must create storage for the registry Operator.

1.3.16.1. Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.

Note

The Prometheus console provides an ImageRegistryRemoved alert, for example:

"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."

1.3.16.2. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.3.16.2.1. Configuring registry storage for VMware vSphere

As a cluster administrator, you must configure your registry to use storage after installation.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on VMware vSphere.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • The storage must have 100Gi capacity.
Important

Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource, for example by using the oc patch sketch that is shown after this procedure.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

    If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim: 1

    1
    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
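
As referenced in step 1, you can also set the spec.storage.pvc field non-interactively. The following oc patch invocation is a minimal sketch; leaving the claim value empty allows the automatic creation of the image-registry-storage PVC:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'
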
1.3.16.2.2. Configuring block registry storage for VMware vSphere

As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types, such as vSphere Virtual Machine Disk (VMDK), during upgrades.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.3.17. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume.
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.

1.3.18. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

1.3.19. Next steps

1.4. Installing a cluster on vSphere with user-provisioned infrastructure

In OpenShift Container Platform version 4.6, you can install a cluster on VMware vSphere infrastructure that you provision.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods.

1.4.1. Prerequisites

  • Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
  • Review details about the OpenShift Container Platform installation and update processes.
  • Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
  • If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

    Note

    Be sure to also review this site list if you are configuring a proxy.

1.4.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.4.3. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.31. Minimum supported vSphere version for VMware components
Component | Minimum supported versions | Description

Hypervisor

vSphere 6.5 and later with HW version 13

This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Storage with in-tree drivers

vSphere 6.5 and later

This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Optional: Networking (NSX-T)

vSphere 6.5U3 or vSphere 6.7U2 and later

vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware’s NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

1.4.4. Machine requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.

1.4.4.1. Required machines

The smallest OpenShift Container Platform clusters require the following hosts:

  • One temporary bootstrap machine
  • Three control plane, or master, machines
  • At least two compute machines, which are also known as worker machines.
Note

The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.

Important

To maintain high availability of your cluster, use separate physical hosts for these cluster machines.

The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9.

Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

Important

All virtual machines must reside in the same datastore and in the same folder as the installer.

1.4.4.2. Network connectivity requirements

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.4.4.3. Minimum resource requirements

Each cluster machine must meet the following minimum requirements:

Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2]

Bootstrap

RHCOS

4

16 GB

100 GB

300

Control plane

RHCOS

4

16 GB

100 GB

300

Compute

RHCOS or RHEL 7.9

2

8 GB

100 GB

300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a virtual machine with one socket, two cores, and two threads per core provides (2 × 2) × 1 = 4 vCPUs.
  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.

1.4.4.4. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
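
For example, you can list pending CSRs and approve them manually by using the OpenShift CLI. This is a minimal sketch of one possible approval mechanism:

$ oc get csr
$ oc adm certificate approve <csr_name>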

1.4.5. Creating the user-provisioned infrastructure

Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.

Procedure

  1. Configure DHCP or set static IP addresses on each node.
  2. Provision the required load balancers.
  3. Configure the ports for your machines.
  4. Configure DNS.
  5. Ensure network connectivity.

1.4.5.1. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server.

During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.

It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
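
For example, an ISC DHCP host reservation that provides a persistent address and host name for one control plane machine might look like the following sketch. The MAC address is illustrative and uses a VMware OUI prefix, as required later in this section:

host master0.ocp4.example.com {
    hardware ethernet 00:50:56:12:34:56;
    fixed-address 192.168.1.97;
    option host-name "master0.ocp4.example.com";
}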

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 1.32. All machines to all machines
Protocol | Port | Description

ICMP

N/A

Network reachability tests

TCP

1936

Metrics

9000-9999

Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.

10250-10259

The default ports that Kubernetes reserves

10256

openshift-sdn

UDP

4789

VXLAN and Geneve

6081

VXLAN and Geneve

9000-9999

Host level services, including the node exporter on ports 9100-9101.

TCP/UDP

30000-32767

Kubernetes node port

Table 1.33. All machines to control plane
Protocol | Port | Description

TCP

6443

Kubernetes API

Table 1.34. Control plane machines to control plane machines
Protocol | Port | Description

TCP

2379-2380

etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Important

OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.
    Important

    Do not configure session persistence for an API load balancer.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.35. API load balancer
Port | Back-end machines (pool members) | Internal | External | Description

    6443

    Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.

    X

    X

    Kubernetes API server

    22623

    Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.

    X

     

    Machine config server

    Note

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

  2. Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.36. Application Ingress load balancer
Port | Back-end machines (pool members) | Internal | External | Description

    443

    The machines that run the Ingress router pods, compute, or worker, by default.

    X

    X

    HTTPS traffic

    80

    The machines that run the Ingress router pods, compute, or worker, by default.

    X

    X

    HTTP traffic

Tip

If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
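
As an illustration only, the following minimal HAProxy fragment implements both load balancers under these requirements. The host names and IP addresses are assumptions that match the sample DNS records later in this section:

defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend kubernetes_api
    bind *:6443
    default_backend kubernetes_api

backend kubernetes_api
    balance roundrobin
    server bootstrap 192.168.1.96:6443 check
    server master0   192.168.1.97:6443 check
    server master1   192.168.1.98:6443 check
    server master2   192.168.1.99:6443 check

frontend machine_config
    bind *:22623
    default_backend machine_config

backend machine_config
    balance roundrobin
    server bootstrap 192.168.1.96:22623 check
    server master0   192.168.1.97:22623 check
    server master1   192.168.1.98:22623 check
    server master2   192.168.1.99:22623 check

frontend ingress_https
    bind *:443
    default_backend ingress_https

backend ingress_https
    balance source
    server worker0 192.168.1.11:443 check
    server worker1 192.168.1.7:443 check

frontend ingress_http
    bind *:80
    default_backend ingress_http

backend ingress_http
    balance source
    server worker0 192.168.1.11:80 check
    server worker1 192.168.1.7:80 check

Remove the bootstrap entries from the 6443 and 22623 back ends after the bootstrap machine initializes the cluster control plane.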

Ethernet adaptor hardware address requirements

When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges:

  • 00:05:69:00:00:00 to 00:05:69:FF:FF:FF
  • 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF
  • 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF
  • 00:50:56:00:00:00 to 00:50:56:FF:FF:FF

If a MAC address outside the VMware OUI is used, the cluster installation will not succeed.

NTP configuration

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.4.5.2. User-provisioned DNS requirements

DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.

The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..

Table 1.37. Required DNS records
Component | Record | Description

Kubernetes API

api.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster.

Important

The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.

Routes

*.apps.<cluster_name>.<base_domain>.

Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Bootstrap

bootstrap.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.

Master hosts

<master><n>.<cluster_name>.<base_domain>.

Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster.

Worker hosts

<worker><n>.<cluster_name>.<base_domain>.

Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

Tip

You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records.

The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another.

Example 1.7. Sample DNS zone database

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1	IN	A	192.168.1.5
smtp	IN	A	192.168.1.5
;
helper	IN	A	192.168.1.5
helper.ocp4	IN	A	192.168.1.5
;
; The api identifies the IP of your load balancer.
api.ocp4		IN	A	192.168.1.5
api-int.ocp4		IN	A	192.168.1.5
;
; The wildcard also identifies the load balancer.
*.apps.ocp4		IN	A	192.168.1.5
;
; Create an entry for the bootstrap host.
bootstrap.ocp4	IN	A	192.168.1.96
;
; Create entries for the master hosts.
master0.ocp4		IN	A	192.168.1.97
master1.ocp4		IN	A	192.168.1.98
master2.ocp4		IN	A	192.168.1.99
;
; Create entries for the worker hosts.
worker0.ocp4		IN	A	192.168.1.11
worker1.ocp4		IN	A	192.168.1.7
;
;EOF

The following example BIND zone file shows sample PTR records for reverse name resolution.

Example 1.8. Sample DNS zone database for reverse records

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
; The syntax is "last octet" and the host must have an FQDN
; with a trailing dot.
97	IN	PTR	master0.ocp4.example.com.
98	IN	PTR	master1.ocp4.example.com.
99	IN	PTR	master2.ocp4.example.com.
;
96	IN	PTR	bootstrap.ocp4.example.com.
;
5	IN	PTR	api.ocp4.example.com.
5	IN	PTR	api-int.ocp4.example.com.
;
11	IN	PTR	worker0.ocp4.example.com.
7	IN	PTR	worker1.ocp4.example.com.
;
;EOF
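
You can spot-check records like these with the commands from the earlier tip. For example, with the sample values from the preceding zone files (adjust the names and addresses for your environment):

$ nslookup api.ocp4.example.com

$ dig -x 192.168.1.5 +short

Example output

api.ocp4.example.com.
api-int.ocp4.example.com.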

1.4.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
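
    To confirm that the key was added, you can optionally list the identities that the agent holds:

    $ ssh-add -l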

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.

1.4.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
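
After you extract the installation program, you can optionally confirm that the binary runs by printing its version:

$ ./openshift-install version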

1.4.8. Manually creating the installation configuration file

For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>
    Important

    You must create a directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

    Note

    You must name this configuration file install-config.yaml.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.4.8.1. Sample install-config.yaml file for VMware vSphere

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute:
- hyperthreading: Enabled 2 3
  name: worker
  replicas: 0 4
controlPlane:
  hyperthreading: Enabled 5 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
platform:
  vsphere:
    vcenter: your.vcenter.server 9
    username: username 10
    password: password 11
    datacenter: datacenter 12
    defaultDatastore: datastore 13
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14
fips: false 15
pullSecret: '{"auths": ...}' 16
sshKey: 'ssh-ed25519 AAAA...' 17
1
The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
3 6
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4
You must set the value of the replicas parameter to 0. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform.
7
The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
8
The cluster name that you specified in your DNS records.
9
The fully-qualified hostname or IP address of the vCenter server.
10
The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
11
The password associated with the vSphere user.
12
The vSphere datacenter.
13
The default vSphere datastore to use.
14
Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>. If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter.
15
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

16
The pull secret that you obtained from OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
17
The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).

1.4.8.2. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
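
After the cluster is installed, you can review the resulting proxy configuration, for example:

$ oc get proxy/cluster -o yaml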

1.4.9. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.
  3. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false.
    3. Save and exit the file.
  4. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

1.4.10. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it.

Prerequisites

  • You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
  • You generated the Ignition config files for your cluster.
  • You installed the jq package.

Procedure

  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    openshift-vw9j6 1

    1
    The output of this command is your cluster name and a random string.
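
Because later steps, such as naming the virtual machine folder, reuse this value, it can be convenient to store it in a shell variable. This is an optional convenience, not a required step:

$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)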

1.4.11. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.

Prerequisites

  • You have obtained the Ignition config files for your cluster.
  • You have access to an HTTP server that you can access from your computer and that the machines that you create can access.
  • You have created a vSphere cluster.

Procedure

  1. Upload the bootstrap Ignition config file that the installation program created, which is named <installation_directory>/bootstrap.ign, to your HTTP server. Note the URL of this file.
  2. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign:

    {
      "ignition": {
        "config": {
          "merge": [
            {
              "source": "<bootstrap_ignition_config_url>", 1
              "verification": {}
            }
          ]
        },
        "timeouts": {},
        "version": "3.1.0"
      },
      "networkd": {},
      "passwd": {},
      "storage": {},
      "systemd": {}
    }
    1
    Specify the URL of the bootstrap Ignition config file that you hosted.

    When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file.

  3. Locate the following Ignition config files that the installation program created:

    • <installation_directory>/master.ign
    • <installation_directory>/worker.ign
    • <installation_directory>/merge-bootstrap.ign
  4. Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM.

    For example, if you use a Linux operating system, you can use the base64 command to encode the files.

    $ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64
    $ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64
    $ base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64
    Important

    If you plan to add more compute machines to your cluster after you finish installation, do not delete these files.

  5. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

    The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova.

  6. In the vSphere Client, create a folder in your datacenter to store your VMs.

    1. Click the VMs and Templates view.
    2. Right-click the name of your datacenter.
    3. Click New Folder → New VM and Template Folder.
    4. In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration.
  7. In the vSphere Client, create a template for the OVA image and then clone the template as needed.

    Note

    In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs.

    1. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template.
    2. On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
    3. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS. Click the name of your vSphere cluster and select the folder you created in the previous step.
    4. On the Select a compute resource tab, click the name of your vSphere cluster.
    5. On the Select storage tab, configure the storage options for your VM.

      • Select Thin Provision or Thick Provision, based on your storage preferences.
      • Select the datastore that you specified in your install-config.yaml file.
    6. On the Select network tab, specify the network that you configured for the cluster, if available.
    7. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further.

      Important

      Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to.

  8. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.

      For a bootstrap machine, specify the URL of the bootstrap Ignition config file that you hosted.

    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • Optional: Override default DHCP networking in vSphere. To enable static IP networking:

        1. Set your static IP configuration:

          $ export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]"

          Example command

          $ export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8"

        2. Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere:

          $ govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=${IPCFG}"
      • Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High. Ensure that your VM’s CPU and memory reservation have the following values:

        • Memory reservation value must be equal to its configured memory size.
        • CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:

        • guestinfo.ignition.config.data: Paste the contents of the base64-encoded Ignition config file for this machine type that you created earlier in this procedure.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
    9. Complete the configuration and power on the VM.
  9. Create the rest of the machines for your cluster by following the preceding steps for each machine.

    Important

    You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.
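
If you use the govc CLI, which this procedure shows for the optional static IP configuration, you can also set the configuration parameters from the cloning steps on a VM from the command line instead of through the vSphere Client. This is a sketch, assuming the base64-encoded files that you created earlier; the VM name is a placeholder:

$ govc vm.change -vm "<vm_name>" \
    -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
    -e "guestinfo.ignition.config.data.encoding=base64" \
    -e "disk.EnableUUID=TRUE"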

1.4.12. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere.

Prerequisites

  • Obtain the base64-encoded Ignition file for your compute machines.
  • You have access to the vSphere template that you created for your cluster.

Procedure

  1. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template’s name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.
    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • From the Latency Sensitivity list, select High.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:

        • guestinfo.ignition.config.data: Paste the contents of the base64-encoded compute Ignition config file for this machine type.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available.
    9. Complete the configuration and power on the VM.
  2. Continue to create more compute machines for your cluster.

1.4.13. Disk partitioning

In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions.

However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node:

  • Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var, such as /var/lib/etcd, a separate partition, but not both.

    Important

    Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them.

  • Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your previous operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions.

Creating a separate /var partition

In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

Procedure

  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifests and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig
    ? SSH Public Key ...
    $ ls $HOME/clusterconfig/openshift/
    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...
  3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    spec:
      config:
        ignition:
          version: 3.1.0
        storage:
          disks:
          - device: /dev/<device_name> 1
            partitions:
            - label: var
              startMiB: <partition_start_offset> 2
              sizeMiB: <partition_size> 3
          filesystems:
            - device: /dev/disk/by-partlabel/var
              path: /var
              format: xfs
        systemd:
          units:
            - name: var.mount 4
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                What=/dev/disk/by-partlabel/var
                Where=/var
                Options=defaults,prjquota 5
                [Install]
                WantedBy=local-fs.target
    1
    The storage device name of the disk that you want to partition.
    2
    When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3
    The size of the data partition in mebibytes.
    4
    The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers, the unit must be named var-lib-containers.mount.
    5
    The prjquota mount option must be enabled for filesystems used for container storage.
    Note

    When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name.

  4. Run openshift-install again to create Ignition configs from a set of files in the manifests and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
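
After a node that uses this machine config boots, you can verify that /var is mounted on its own partition, for example (the node name is a placeholder):

$ ssh core@<node_name> df -h /var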

1.4.14. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.14.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
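
For example, you can display the version of the client that you installed:

$ oc version --client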

1.4.14.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.4.14.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.4.15. Creating the cluster

To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites

  • Create the required infrastructure for the cluster.
  • You obtained the installation program and generated the Ignition config files for your cluster.
  • You used the Ignition config files to create RHCOS machines for your cluster.
  • Your machines have direct Internet access or have an HTTP or HTTPS proxy available.

Procedure

  1. Monitor the bootstrap process:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    Example output

    INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
    INFO API v1.19.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources

    The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.

  2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.

    Important

    You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.

1.4.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

1.4.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.19.0
    master-1  Ready     master  63m  v1.19.0
    master-2  Ready     master  64m  v1.19.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.20.0
    master-1  Ready     master  73m  v1.20.0
    master-2  Ready     master  74m  v1.20.0
    worker-0  Ready     worker  11m  v1.20.0
    worker-1  Ready     worker  11m  v1.20.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
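
If you add many machines, approving CSRs individually does not scale. The following loop is a minimal sketch that approves every pending CSR once per minute; as the notes above explain, a production method must also verify that each request comes from a legitimate node before approving it:

$ while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
  done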

1.4.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

  • Your control plane has initialized.

Procedure

  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

  2. Configure the Operators that are not available.

1.4.18.1. Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
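
For example, you can make this change with a single patch command, which is equivalent to editing the configuration resource:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'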

Note

The Prometheus console provides an ImageRegistryRemoved alert, for example:

"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."

1.4.18.2. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.4.18.2.1. Configuring registry storage for VMware vSphere

As a cluster administrator, following installation you must configure your registry to use storage.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on VMware vSphere.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • Your storage must have 100Gi capacity.
Important

Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

    If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim: 1

    1
    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
1.4.18.2.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

  • To set the image registry storage to an empty directory:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
    Warning

    Configure this option for only non-production clusters.

    If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

    Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

    Wait a few minutes and run the command again.

1.4.18.2.3. Configuring block registry storage for VMware vSphere

To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.4.19. Completing installation on user-provisioned infrastructure

After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.

Prerequisites

  • Your control plane has initialized.
  • You have completed the initial Operator configuration.

Procedure

  1. Confirm that all the cluster components are online with the following command:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

    Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    INFO Waiting up to 30m0s for the cluster to initialize...

    The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
  2. Confirm that the Kubernetes API server is communicating with the pods.

    1. To view a list of all pods, use the following command:

      $ oc get pods --all-namespaces

      Example output

      NAMESPACE                         NAME                                            READY   STATUS      RESTARTS   AGE
      openshift-apiserver-operator      openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running     1          9m
      openshift-apiserver               apiserver-67b9g                                 1/1     Running     0          3m
      openshift-apiserver               apiserver-ljcmx                                 1/1     Running     0          1m
      openshift-apiserver               apiserver-z25h4                                 1/1     Running     0          2m
      openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8        1/1     Running     0          5m
      ...

    2. View the logs for a pod that is listed in the output of the previous command by using the following command:

      $ oc logs <pod_name> -n <namespace> 1
      1
      Specify the pod name and namespace, as shown in the output of the previous command.

      If the pod logs display, the Kubernetes API server can communicate with the cluster machines.

You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere.

1.4.20. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume.
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.
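
As an illustration of the clone step, assuming SSH access to an ESXi host, you can clone the backing virtual disk of a persistent volume with vmkfstools. The datastore and VMDK paths below are hypothetical; look up the real path in the PV definition, for example with oc get pv <pv_name> -o jsonpath='{.spec.vsphereVolume.volumePath}':

$ vmkfstools -i /vmfs/volumes/datastore1/kubevols/pv-0001.vmdk \
    /vmfs/volumes/datastore1/backups/pv-0001-clone.vmdk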

1.4.21. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

1.4.22. Next steps

1.5. Installing a cluster on vSphere with network customizations

In OpenShift Container Platform version 4.6, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods.

1.5.1. Prerequisites

  • Review details about the OpenShift Container Platform installation and update processes.
  • Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible.
  • If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall, you must configure it to access Red Hat Insights.

1.5.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.5.3. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.38. Minimum supported vSphere version for VMware components
Component | Minimum supported versions | Description

Hypervisor

vSphere 6.5 and later with HW version 13

This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Storage with in-tree drivers

vSphere 6.5 and later

This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Optional: Networking (NSX-T)

vSphere 6.5U3 or vSphere 6.7U2 and later

vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware’s NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

1.5.4. Machine requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.

1.5.4.1. Required machines

The smallest OpenShift Container Platform clusters require the following hosts:

  • One temporary bootstrap machine
  • Three control plane, or master, machines
  • At least two compute machines, which are also known as worker machines.
Note

The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.

Important

To maintain high availability of your cluster, use separate physical hosts for these cluster machines.

The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9.

Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

Important

All virtual machines must reside in the same datastore and in the same folder as the installer.

1.5.4.2. Network connectivity requirements

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking in initramfs during boot to fetch Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.5.4.3. Minimum resource requirements

Each cluster machine must meet the following minimum requirements:

Machine         Operating System      vCPU [1]   Virtual RAM   Storage   IOPS [2]

Bootstrap       RHCOS                 4          16 GB         100 GB    300
Control plane   RHCOS                 4          16 GB         100 GB    300
Compute         RHCOS or RHEL 7.9     2          8 GB          100 GB    300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. A worked example follows these notes.
  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
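
For example, with SMT enabled, a virtual machine configured with one socket and four cores per socket provides (2 threads per core × 4 cores) × 1 socket = 8 vCPUs.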

1.5.4.4. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
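
For example, after installation you can list pending CSRs and approve them individually; <csr_name> is the name reported by the first command:

$ oc get csr
$ oc adm certificate approve <csr_name>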

1.5.5. Creating the user-provisioned infrastructure

Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.

Procedure

  1. Configure DHCP or set static IP addresses on each node.
  2. Provision the required load balancers.
  3. Configure the ports for your machines.
  4. Configure DNS.
  5. Ensure network connectivity.

1.5.5.1. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking in initramfs during boot to fetch Ignition config from the machine config server.

During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.

It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
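
For illustration only, a persistent reservation in an ISC DHCP server configuration might look like the following; the MAC address, IP address, and host name are placeholders that must match your environment:

host master0 {
  hardware ethernet 00:50:56:ab:cd:ef;          # must fall within a VMware OUI range
  fixed-address 192.168.1.97;                   # persistent IP address
  option host-name "master0.ocp4.example.com";  # persistent host name
}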

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 1.39. All machines to all machines

Protocol   Port          Description

ICMP       N/A           Network reachability tests
TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn
UDP        4789          VXLAN and Geneve
           6081          VXLAN and Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
TCP/UDP    30000-32767   Kubernetes node port

Table 1.40. All machines to control plane

Protocol   Port   Description

TCP        6443   Kubernetes API

Table 1.41. Control plane machines to control plane machines

Protocol   Port        Description

TCP        2379-2380   etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Important

OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.
    Important

    Do not configure session persistence for an API load balancer.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.42. API load balancer

    Port 6443 (internal and external): Kubernetes API server.
    Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.

    Port 22623 (internal only): Machine config server.
    Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.

    Note

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within that time frame after /readyz returns an error or becomes healthy, the endpoint must be removed or added. Probe intervals of 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

  2. Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.43. Application Ingress load balancer

    Port 443 (internal and external): HTTPS traffic.
    Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.

    Port 80 (internal and external): HTTP traffic.
    Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.

Tip

If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
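
For illustration, a minimal HAProxy configuration in TCP mode that satisfies these requirements might look like the following sketch. The IP addresses match the sample DNS records later in this section, the host names are placeholders, and a production configuration would also probe the /readyz endpoint as described in the earlier note:

defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen api-server-6443
    bind *:6443
    balance roundrobin
    # Remove the bootstrap entry after bootstrapping completes.
    server bootstrap 192.168.1.96:6443 check
    server master0   192.168.1.97:6443 check
    server master1   192.168.1.98:6443 check
    server master2   192.168.1.99:6443 check

listen machine-config-server-22623
    bind *:22623
    balance roundrobin
    server bootstrap 192.168.1.96:22623 check
    server master0   192.168.1.97:22623 check
    server master1   192.168.1.98:22623 check
    server master2   192.168.1.99:22623 check

listen ingress-router-443
    bind *:443
    balance source
    server worker0 192.168.1.11:443 check
    server worker1 192.168.1.7:443 check

listen ingress-router-80
    bind *:80
    balance source
    server worker0 192.168.1.11:80 check
    server worker1 192.168.1.7:80 check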

Ethernet adaptor hardware address requirements

When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges:

  • 00:05:69:00:00:00 to 00:05:69:FF:FF:FF
  • 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF
  • 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF
  • 00:50:56:00:00:00 to 00:50:56:FF:FF:FF

If a MAC address outside the VMware OUI is used, the cluster installation will not succeed.

NTP configuration

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
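
As an illustration, the chrony.conf content for pointing nodes at a local time server might look like the following; clock.example.com is a placeholder, and on OpenShift Container Platform you deliver this file through a MachineConfig object as described in the Configuring chrony time service documentation:

server clock.example.com iburst   # local NTP server; placeholder name
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync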

1.5.5.2. User-provisioned DNS requirements

DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.

The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..

Table 1.44. Required DNS records
Component | Record | Description

Kubernetes API

api.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster.

Important

The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.

Routes

*.apps.<cluster_name>.<base_domain>.

Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Bootstrap

bootstrap.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.

Master hosts

<master><n>.<cluster_name>.<base_domain>.

DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster.

Worker hosts

<worker><n>.<cluster_name>.<base_domain>.

Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

Tip

You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records.
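
For example, against the sample records that follow, a reverse lookup of a control plane address returns output similar to this; the IP address is illustrative:

$ dig -x 192.168.1.97 +short

Example output

master0.ocp4.example.com.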

The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another.

Example 1.9. Sample DNS zone database

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1	IN	A	192.168.1.5
smtp	IN	A	192.168.1.5
;
helper	IN	A	192.168.1.5
helper.ocp4	IN	A	192.168.1.5
;
; The api identifies the IP of your load balancer.
api.ocp4		IN	A	192.168.1.5
api-int.ocp4		IN	A	192.168.1.5
;
; The wildcard also identifies the load balancer.
*.apps.ocp4		IN	A	192.168.1.5
;
; Create an entry for the bootstrap host.
bootstrap.ocp4	IN	A	192.168.1.96
;
; Create entries for the master hosts.
master0.ocp4		IN	A	192.168.1.97
master1.ocp4		IN	A	192.168.1.98
master2.ocp4		IN	A	192.168.1.99
;
; Create entries for the worker hosts.
worker0.ocp4		IN	A	192.168.1.11
worker1.ocp4		IN	A	192.168.1.7
;
;EOF

The following example BIND zone file shows sample PTR records for reverse name resolution.

Example 1.10. Sample DNS zone database for reverse records

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
; The syntax is "last octet" and the host must have an FQDN
; with a trailing dot.
97	IN	PTR	master0.ocp4.example.com.
98	IN	PTR	master1.ocp4.example.com.
99	IN	PTR	master2.ocp4.example.com.
;
96	IN	PTR	bootstrap.ocp4.example.com.
;
5	IN	PTR	api.ocp4.example.com.
5	IN	PTR	api-int.ocp4.example.com.
;
11	IN	PTR	worker0.ocp4.example.com.
7	IN	PTR	worker1.ocp4.example.com.
;
;EOF

1.5.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.5.8. Manually creating the installation configuration file

For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>
    Important

    You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

    Note

    You must name this configuration file install-config.yaml.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.5.8.1. Sample install-config.yaml file for VMware vSphere

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute:
- hyperthreading: Enabled 2 3
  name: worker
  replicas: 0 4
controlPlane:
  hyperthreading: Enabled 5 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
platform:
  vsphere:
    vcenter: your.vcenter.server 9
    username: username 10
    password: password 11
    datacenter: datacenter 12
    defaultDatastore: datastore 13
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14
fips: false 15
pullSecret: '{"auths": ...}' 16
sshKey: 'ssh-ed25519 AAAA...' 17
1
The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
3 6
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4
You must set the value of the replicas parameter to 0. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform.
7
The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
8
The cluster name that you specified in your DNS records.
9
The fully-qualified hostname or IP address of the vCenter server.
10
The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
11
The password associated with the vSphere user.
12
The vSphere datacenter.
13
The default vSphere datastore to use.
14
Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>. If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter.
15
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

16
The pull secret that you obtained from OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
17
The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).

1.5.8.2. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.5.9. Network configuration phases

When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration:

Phase 1

After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields:

  • networking.networkType
  • networking.clusterNetwork
  • networking.serviceNetwork
  • networking.machineNetwork

    For more information on these fields, refer to "Installation configuration parameters".

    Note

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
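
    For example, if your machines receive addresses from the 192.168.1.0/24 subnet, the install-config.yaml fragment would be:

    networking:
      machineNetwork:
      - cidr: 192.168.1.0/24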

Phase 2
After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify.

You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.

1.5.10. Specifying advanced network configuration

You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster.

Important

Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.

Prerequisites

  • Create the install-config.yaml file and complete any modifications to it.
  • Create the Ignition config files for your cluster.

Procedure

  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory>

    where:

    <installation_directory>
    Specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
    EOF

    where:

    <installation_directory>
    Specifies the directory name that contains the manifests/ directory for your cluster.
  3. Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following example:

    Specify a different VXLAN port for the OpenShift SDN network provider

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        openshiftSDNConfig:
          vxlanPort: 4800

  4. Save the cluster-network-03-config.yml file and quit the text editor.
  5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
  6. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment.

1.5.11. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:

clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.

You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.

1.5.11.1. Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 1.45. Cluster Network Operator configuration object
Field | Type | Description

metadata.name

string

The name of the CNO object. This name is always cluster.

spec.clusterNetwork

array

A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

This value is read-only and specified in the install-config.yaml file.

spec.serviceNetwork

array

A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

This value is read-only and specified in the install-config.yaml file.

spec.defaultNetwork

object

Configures the Container Network Interface (CNI) cluster network provider for the cluster network.

spec.kubeProxyConfig

object

The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect.

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 1.46. defaultNetwork object
Field | Type | Description

type

string

Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation.

Note

OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default.

openshiftSDNConfig

object

This object is only valid for the OpenShift SDN cluster network provider.

ovnKubernetesConfig

object

This object is only valid for the OVN-Kubernetes cluster network provider.

Configuration for the OpenShift SDN CNI cluster network provider

The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.

Table 1.47. openshiftSDNConfig object
Field | Type | Description

mode

string

Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.

The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.

mtu

integer

The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

This value cannot be changed after cluster installation.

vxlanPort

integer

The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation.

If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.

Example OpenShift SDN configuration

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789

Configuration for the OVN-Kubernetes CNI cluster network provider

The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.

Table 1.48. ovnKubernetesConfig object
Field | Type | Description

mtu

integer

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

This value cannot be changed after cluster installation.

genevePort

integer

The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.

Example OVN-Kubernetes configuration

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081

kubeProxyConfig object configuration

The values for the kubeProxyConfig object are defined in the following table:

Table 1.49. kubeProxyConfig object
Field | Type | Description

iptablesSyncPeriod

string

The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

Note

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

proxyArguments.iptables-min-sync-period

array

The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is:

kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s

1.5.12. Creating the Ignition config files

Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  • Obtain the Ignition config files:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the directory name to store the files that the installation program creates.
    Important

    If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

1.5.13. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it.

Prerequisites

  • You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
  • You generated the Ignition config files for your cluster.
  • You installed the jq package.

Procedure

  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    openshift-vw9j6 1

    1
    The output of this command is your cluster name and a random string.

1.5.14. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.

Prerequisites

  • You have obtained the Ignition config files for your cluster.
  • You have access to an HTTP server that you can access from your computer and that the machines that you create can access.
  • You have created a vSphere cluster.

Procedure

  1. Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign, that the installation program created to your HTTP server. Note the URL of this file.
  2. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign:

    {
      "ignition": {
        "config": {
          "merge": [
            {
              "source": "<bootstrap_ignition_config_url>", 1
              "verification": {}
            }
          ]
        },
        "timeouts": {},
        "version": "3.1.0"
      },
      "networkd": {},
      "passwd": {},
      "storage": {},
      "systemd": {}
    }
    1
    Specify the URL of the bootstrap Ignition config file that you hosted.

    When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file.

  3. Locate the following Ignition config files that the installation program created:

    • <installation_directory>/master.ign
    • <installation_directory>/worker.ign
    • <installation_directory>/merge-bootstrap.ign
  4. Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM.

    For example, if you use a Linux operating system, you can use the base64 command to encode the files.

    $ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64
    $ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64
    $ base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64
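
    Optionally, you can confirm that an encoded file decodes back to valid Ignition JSON, for example:

    $ base64 -d <installation_directory>/master.64 | jq .ignition.version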
    Important

    If you plan to add more compute machines to your cluster after you finish installation, do not delete these files.

  5. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

    The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova.
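
    For example, you can download the image with curl. The following URL is illustrative; take the exact path and file name for your version from the mirror page:

    $ curl -O https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/latest/rhcos-vmware.x86_64.ova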

  6. In the vSphere Client, create a folder in your datacenter to store your VMs.

    1. Click the VMs and Templates view.
    2. Right-click the name of your datacenter.
    3. Click New Folder → New VM and Template Folder.
    4. In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration.
  7. In the vSphere Client, create a template for the OVA image and then clone the template as needed.

    Note

    In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs.

    1. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template.
    2. On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
    3. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS. Click the name of your vSphere cluster and select the folder you created in the previous step.
    4. On the Select a compute resource tab, click the name of your vSphere cluster.
    5. On the Select storage tab, configure the storage options for your VM.

      • Select Thin Provision or Thick Provision, based on your storage preferences.
      • Select the datastore that you specified in your install-config.yaml file.
    6. On the Select network tab, specify the network that you configured for the cluster, if available.
    7. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further.

      Important

      Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to.

  8. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.

      For a bootstrap machine, specify the URL of the bootstrap Ignition config file that you hosted.

    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • Optional: Override default DHCP networking in vSphere. To enable static IP networking:

        1. Set your static IP configuration:

          $ export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]"

          Example command

          $ export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8"

        2. Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere:

          $ govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=${IPCFG}"
      • Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High. Ensure that your VM’s CPU and memory reservation have the following values:

        • Memory reservation value must be equal to its configured memory size.
        • CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:

        • guestinfo.ignition.config.data: Paste the contents of the base64-encoded Ignition config file for this machine type that you created previously in this procedure.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
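
        If you prefer to set these parameters from the command line instead of the vSphere Client, you can use govc, as in the following sketch. The VM name is a placeholder, and the example encodes the control plane configuration; substitute the base64 file for the machine type that you are provisioning:

          $ govc vm.change -vm "<vm_name>" \
              -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
              -e "guestinfo.ignition.config.data.encoding=base64" \
              -e "disk.EnableUUID=TRUE"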
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
    9. Complete the configuration and power on the VM.
  9. Create the rest of the machines for your cluster by following the preceding steps for each machine.

    Important

    You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.

1.5.15. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere.

Prerequisites

  • Obtain the base64-encoded Ignition file for your compute machines.
  • You have access to the vSphere template that you created for your cluster.

Procedure

  1. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template’s name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.
    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • From the Latency Sensitivity list, select High.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:

        • guestinfo.ignition.config.data: Paste the contents of the base64-encoded compute Ignition config file for this machine type.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available.
    9. Complete the configuration and power on the VM.
  2. Continue to create more compute machines for your cluster.

1.5.16. Disk partitioning

In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions.

However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node:

  • Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var, such as /var/lib/etcd, a separate partition, but not both.

    Important

    Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them.

  • Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your previous operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions.

Creating a separate /var partition

In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

Procedure

  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig
    ? SSH Public Key ...
    $ ls $HOME/clusterconfig/openshift/
    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...
  3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    spec:
      config:
        ignition:
          version: 3.1.0
        storage:
          disks:
          - device: /dev/<device_name> 1
            partitions:
            - label: var
              startMiB: <partition_start_offset> 2
              sizeMiB: <partition_size> 3
          filesystems:
            - device: /dev/disk/by-partlabel/var
              path: /var
              format: xfs
        systemd:
          units:
            - name: var.mount 4
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                What=/dev/disk/by-partlabel/var
                Where=/var
                Options=defaults,prjquota 5
                [Install]
                WantedBy=local-fs.target
    1
    The storage device name of the disk that you want to partition.
    2
    When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3
    The size of the data partition in mebibytes.
    4
    The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers, the unit must be named var-lib-containers.mount.
    5
    The prjquota mount option must be enabled for filesystems used for container storage.
    Note

    When creating a separate /var partition, you cannot use different instance types for worker nodes if the instance types do not have the same device name.

  4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

1.5.17. Creating the cluster

To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites

  • Create the required infrastructure for the cluster.
  • You obtained the installation program and generated the Ignition config files for your cluster.
  • You used the Ignition config files to create RHCOS machines for your cluster.
  • Your machines have direct Internet access or have an HTTP or HTTPS proxy available.

Procedure

  1. Monitor the bootstrap process:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    Example output

    INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
    INFO API v1.19.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources

    The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.

  2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.

    Important

    You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.
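
    For example, if you use govc, you can delete the bootstrap VM after you remove it from the load balancer. This is an illustrative command; the VM name is a placeholder:

    $ govc vm.destroy "<bootstrap_vm_name>"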

1.5.18. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

1.5.19. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.19.0
    master-1  Ready     master  63m  v1.19.0
    master-2  Ready     master  64m  v1.19.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.20.0
    master-1  Ready     master  73m  v1.20.0
    master-2  Ready     master  74m  v1.20.0
    worker-0  Ready     worker  11m  v1.20.0
    worker-1  Ready     worker  11m  v1.20.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

1.5.19.1. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

  • Your control plane has initialized.

Procedure

  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

  2. Configure the Operators that are not available.

1.5.19.2. Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
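
For example, you can make this change with a merge patch:

$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"managementState":"Managed"}}'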

Note

The Prometheus console provides an ImageRegistryRemoved alert, for example:

"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."

1.5.19.3. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.5.19.3.1. Configuring block registry storage for VMware vSphere

To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.
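
    As an alternative to editing the resource interactively, you can set the claim with a patch. The following command is a sketch that assumes the image-registry-storage PVC created earlier in this procedure:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"storage":{"pvc":{"claim":"image-registry-storage"}}}}'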

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.5.20. Completing installation on user-provisioned infrastructure

After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.

Prerequisites

  • Your control plane has initialized.
  • You have completed the initial Operator configuration.

Procedure

  1. Confirm that all the cluster components are online with the following command:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

    Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    INFO Waiting up to 30m0s for the cluster to initialize...

    The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
  2. Confirm that the Kubernetes API server is communicating with the pods.

    1. To view a list of all pods, use the following command:

      $ oc get pods --all-namespaces

      Example output

      NAMESPACE                         NAME                                            READY   STATUS      RESTARTS   AGE
      openshift-apiserver-operator      openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running     1          9m
      openshift-apiserver               apiserver-67b9g                                 1/1     Running     0          3m
      openshift-apiserver               apiserver-ljcmx                                 1/1     Running     0          1m
      openshift-apiserver               apiserver-z25h4                                 1/1     Running     0          2m
      openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8        1/1     Running     0          5m
      ...

    2. View the logs for a pod that is listed in the output of the previous command by using the following command:

      $ oc logs <pod_name> -n <namespace> 1
      1
      Specify the pod name and namespace, as shown in the output of the previous command.

      If the pod logs display, the Kubernetes API server can communicate with the cluster machines.

You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere.

1.5.21. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume.
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.

1.5.22. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

1.6. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure

In OpenShift Container Platform version 4.6, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods.

1.6.1. Prerequisites

  • Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform.

    Important

    Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

  • Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
  • Review details about the OpenShift Container Platform installation and update processes.
  • Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
  • If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
  • If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to.

    Note

    Be sure to also review this site list if you are configuring a proxy.

1.6.2. About installations in restricted networks

In OpenShift Container Platform 4.6, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services’ Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.

Important

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

1.6.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

  • The ClusterVersion status includes an Unable to retrieve available updates error.
  • By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

1.6.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to obtain the images that are necessary to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.6.4. VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.

Table 1.50. Minimum supported vSphere version for VMware components
Component | Minimum supported versions | Description

Hypervisor

vSphere 6.5 and later with HW version 13

This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Storage with in-tree drivers

vSphere 6.5 and later

This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

Optional: Networking (NSX-T)

vSphere 6.5U3 or vSphere 6.7U2 and later

vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware’s NSX Container Plug-in (NCP) 3.0.2 is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+.

If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Important

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

1.6.5. Machine requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.

1.6.5.1. Required machines

The smallest OpenShift Container Platform clusters require the following hosts:

  • One temporary bootstrap machine
  • Three control plane, or master, machines
  • At least two compute machines, which are also known as worker machines.
Note

The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.

Important

To maintain high availability of your cluster, use separate physical hosts for these cluster machines.

The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9.

Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

Important

All virtual machines must reside in the same datastore and in the same folder as the installer.

1.6.5.2. Network connectivity requirements

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.6.5.3. Minimum resource requirements

Each cluster machine must meet the following minimum requirements:

Machine       | Operating System  | vCPU [1] | Virtual RAM | Storage | IOPS [2]
Bootstrap     | RHCOS             | 4        | 16 GB       | 100 GB  | 300
Control plane | RHCOS             | 4        | 16 GB       | 100 GB  | 300
Compute       | RHCOS or RHEL 7.9 | 2        | 8 GB        | 100 GB  | 300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.

1.6.5.4. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.

1.6.6. Creating the user-provisioned infrastructure

Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.

Procedure

  1. Configure DHCP or set static IP addresses on each node.
  2. Provision the required load balancers.
  3. Configure the ports for your machines.
  4. Configure DNS.
  5. Ensure network connectivity.

1.6.6.1. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server.

During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.

It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
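
For example, with ISC dhcpd you can reserve a persistent IP address and host name for each machine. The following host entry is a sketch; the MAC address, IP address, and host name are placeholders, and the MAC address must fall within the VMware OUI ranges that are listed later in this section:

host master0 {
  hardware ethernet 00:50:56:00:00:01;
  fixed-address 192.168.1.97;
  option host-name "master0.ocp4.example.com";
}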

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 1.51. All machines to all machines
Protocol | Port        | Description
ICMP     | N/A         | Network reachability tests
TCP      | 1936        | Metrics
TCP      | 9000-9999   | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
TCP      | 10250-10259 | The default ports that Kubernetes reserves
TCP      | 10256       | openshift-sdn
UDP      | 4789        | VXLAN and Geneve
UDP      | 6081        | VXLAN and Geneve
UDP      | 9000-9999   | Host level services, including the node exporter on ports 9100-9101.
TCP/UDP  | 30000-32767 | Kubernetes node port

Table 1.52. All machines to control plane
Protocol | Port | Description
TCP      | 6443 | Kubernetes API

Table 1.53. Control plane machines to control plane machines
Protocol | Port      | Description
TCP      | 2379-2380 | etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Important

OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.
    Important

    Do not configure session persistence for an API load balancer.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.54. API load balancer
    Port  | Back-end machines (pool members) | Internal | External | Description
    6443  | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server
    22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X |   | Machine config server

    Note

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.
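
    As an illustration, a minimal HAProxy fragment for the API port might look like the following sketch. The back-end addresses match the sample DNS zone database later in this section and are placeholders for your environment; adapt the health check to your load balancer, and remove the bootstrap server from the back end after the bootstrap process completes:

    frontend api-server
        bind *:6443
        mode tcp
        default_backend api-backend

    backend api-backend
        mode tcp
        balance roundrobin
        option httpchk GET /readyz HTTP/1.0
        http-check expect status 200
        server bootstrap 192.168.1.96:6443 check check-ssl verify none
        server master0 192.168.1.97:6443 check check-ssl verify none
        server master1 192.168.1.98:6443 check check-ssl verify none
        server master2 192.168.1.99:6443 check check-ssl verify none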

  2. Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.55. Application Ingress load balancer
    Port | Back-end machines (pool members) | Internal | External | Description
    443  | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic
    80   | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic

Tip

If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.

Ethernet adaptor hardware address requirements

When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges:

  • 00:05:69:00:00:00 to 00:05:69:FF:FF:FF
  • 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF
  • 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF
  • 00:50:56:00:00:00 to 00:50:56:FF:FF:FF

If a MAC address outside the VMware OUI is used, the cluster installation will not succeed.

NTP configuration

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.6.6.2. User-provisioned DNS requirements

DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.

The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..

Table 1.56. Required DNS records
Component | Record | Description

Kubernetes API

api.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster.

Important

The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.

Routes

*.apps.<cluster_name>.<base_domain>.

Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Bootstrap

bootstrap.<cluster_name>.<base_domain>.

Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.

Master hosts

<master><n>.<cluster_name>.<base_domain>.

Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster.

Worker hosts

<worker><n>.<cluster_name>.<base_domain>.

Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

Tip

You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records.
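
For example, using the sample records from the zone files that follow:

$ nslookup api.ocp4.example.com
$ dig -x 192.168.1.5 +short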

The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another.

Example 1.11. Sample DNS zone database

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1	IN	A	192.168.1.5
smtp	IN	A	192.168.1.5
;
helper	IN	A	192.168.1.5
helper.ocp4	IN	A	192.168.1.5
;
; The api identifies the IP of your load balancer.
api.ocp4		IN	A	192.168.1.5
api-int.ocp4		IN	A	192.168.1.5
;
; The wildcard also identifies the load balancer.
*.apps.ocp4		IN	A	192.168.1.5
;
; Create an entry for the bootstrap host.
bootstrap.ocp4	IN	A	192.168.1.96
;
; Create entries for the master hosts.
master0.ocp4		IN	A	192.168.1.97
master1.ocp4		IN	A	192.168.1.98
master2.ocp4		IN	A	192.168.1.99
;
; Create entries for the worker hosts.
worker0.ocp4		IN	A	192.168.1.11
worker1.ocp4		IN	A	192.168.1.7
;
;EOF

The following example BIND zone file shows sample PTR records for reverse name resolution.

Example 1.12. Sample DNS zone database for reverse records

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
; The syntax is "last octet" and the host must have an FQDN
; with a trailing dot.
97	IN	PTR	master0.ocp4.example.com.
98	IN	PTR	master1.ocp4.example.com.
99	IN	PTR	master2.ocp4.example.com.
;
96	IN	PTR	bootstrap.ocp4.example.com.
;
5	IN	PTR	api.ocp4.example.com.
5	IN	PTR	api-int.ocp4.example.com.
;
11	IN	PTR	worker0.ocp4.example.com.
7	IN	PTR	worker1.ocp4.example.com.
;
;EOF
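
If you serve these zones with BIND, you can validate each zone file before loading it. This sketch assumes hypothetical file names for the forward and reverse zone files:

$ named-checkzone example.com example.com.zone
$ named-checkzone 1.168.192.in-addr.arpa 1.168.192.in-addr.arpa.zone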

1.6.7. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, disaster recovery and debugging are required.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.

1.6.8. Manually creating the installation configuration file

For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the access token for your cluster.
  • Obtain the imageContentSources section from the output of the command to mirror the repository.
  • Obtain the contents of the certificate for your mirror registry.

Procedure

  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>
    Important

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

    Note

    You must name this configuration file install-config.yaml.

    • Unless you use a registry that RHCOS trusts by default, such as docker.io, you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror.
    • You must include the imageContentSources section from the output of the command to mirror the repository.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
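
    For example, a minimal backup is a copy outside the installation directory; the destination path is arbitrary:

    $ cp <installation_directory>/install-config.yaml ~/install-config.yaml.backup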

1.6.8.1. Sample install-config.yaml file for VMware vSphere

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute:
- hyperthreading: Enabled 2 3
  name: worker
  replicas: 0 4
controlPlane:
  hyperthreading: Enabled 5 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
platform:
  vsphere:
    vcenter: your.vcenter.server 9
    username: username 10
    password: password 11
    datacenter: datacenter 12
    defaultDatastore: datastore 13
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14
fips: false 15
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' 16
sshKey: 'ssh-ed25519 AAAA...' 17
additionalTrustBundle: | 18
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: 19
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1
The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
3 6
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4
You must set the value of the replicas parameter to 0. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform.
7
The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
8
The cluster name that you specified in your DNS records.
9
The fully-qualified hostname or IP address of the vCenter server.
10
The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
11
The password associated with the vSphere user.
12
The vSphere datacenter.
13
The default vSphere datastore to use.
14
Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>. If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter.
15
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

16
For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
17
The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18
Provide the contents of the certificate file that you used for your mirror registry.
19
Provide the imageContentSources section from the output of the command to mirror the repository.

1.6.8.2. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
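
After installation, you can inspect the resulting Proxy object; for example:

$ oc get proxy cluster -o yaml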

1.6.9. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation program transforms the installation configuration file into Kubernetes manifests and then wraps the manifests into Ignition configuration files, which are later used to create the cluster.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.
  3. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false. A scripted alternative is sketched after this procedure.
    3. Save and exit the file.
  4. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
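
As noted in step 3, ensure that the mastersSchedulable parameter is false before you create the Ignition config files in step 4. A minimal scripted check and edit, assuming GNU sed, might look like this:

$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' \
    <installation_directory>/manifests/cluster-scheduler-02-config.yml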

1.6.10. Configuring chrony time service

You must set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.

Procedure

  1. Create the contents of the chrony.conf file and encode it as base64. For example:

    $ cat << EOF | base64
        pool 0.rhel.pool.ntp.org iburst 1
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
    EOF
    1
    Specify any valid, reachable time source, such as the one provided by your DHCP server.

    Example output

    ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli
    L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv
    dmFyL2xvZy9jaHJvbnkK

  2. Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig for the worker role. Create MachineConfig files for each type of machine that your cluster uses:

    $ cat << EOF > ./99-masters-chrony-configuration.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 99-masters-chrony-configuration
    spec:
      config:
        ignition:
          config: {}
          security:
            tls: {}
          timeouts: {}
          version: 3.1.0
        networkd: {}
        passwd: {}
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK
            mode: 420 1
            overwrite: true
            path: /etc/chrony.conf
      osImageURL: ""
    EOF
    1
Specify an octal value for the mode field in the machine config file, such as 0644. After you create the file and apply the changes, the mode is converted to its decimal equivalent (420 in this example). You can check the YAML file with the oc get mc <mc-name> -o yaml command.
  3. Make a backup copy of the configuration files.
  4. Apply the configurations in one of two ways:

    • If the cluster is not up yet, after you generate manifest files, add this file to the <installation_directory>/openshift directory, and then continue to create the cluster.
    • If the cluster is already running, apply the file:

      $ oc apply -f ./99-masters-chrony-configuration.yaml
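
On a running cluster, you can confirm that the machine config exists and that the embedded contents decode as intended; the object name matches the example above:

$ oc get mc 99-masters-chrony-configuration
$ echo "<base64_string>" | base64 --decode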

1.6.11. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it.

Prerequisites

  • You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
  • You generated the Ignition config files for your cluster.
  • You installed the jq package.

Procedure

  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    openshift-vw9j6 1

    1
    The output of this command is your cluster name and a random string.
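
If later commands need the infrastructure name, you can capture it in a shell variable; the variable name INFRA_ID is arbitrary:

$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
$ echo $INFRA_ID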

1.6.12. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.

Prerequisites

  • You have obtained the Ignition config files for your cluster.
  • You have access to an HTTP server that you can access from your computer and that the machines that you create can access.
  • You have created a vSphere cluster.

Procedure

  1. Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign, that the installation program created to your HTTP server. Note the URL of this file.
  2. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign:

    {
      "ignition": {
        "config": {
          "merge": [
            {
              "source": "<bootstrap_ignition_config_url>", 1
              "verification": {}
            }
          ]
        },
        "timeouts": {},
        "version": "3.1.0"
      },
      "networkd": {},
      "passwd": {},
      "storage": {},
      "systemd": {}
    }
    1
    Specify the URL of the bootstrap Ignition config file that you hosted.

    When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file.

  3. Locate the following Ignition config files that the installation program created:

    • <installation_directory>/master.ign
    • <installation_directory>/worker.ign
    • <installation_directory>/merge-bootstrap.ign
  4. Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM.

    For example, if you use a Linux operating system, you can use the base64 command to encode the files.

    $ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64
    $ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64
    $ base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64
    Important

    If you plan to add more compute machines to your cluster after you finish installation, do not delete these files.

  5. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

    The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova.

  6. In the vSphere Client, create a folder in your datacenter to store your VMs.

    1. Click the VMs and Templates view.
    2. Right-click the name of your datacenter.
    3. Click New Folder → New VM and Template Folder.
    4. In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration.
  7. In the vSphere Client, create a template for the OVA image and then clone the template as needed.

    Note

    In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs.

    1. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template.
    2. On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
    3. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS. Click the name of your vSphere cluster and select the folder you created in the previous step.
    4. On the Select a compute resource tab, click the name of your vSphere cluster.
    5. On the Select storage tab, configure the storage options for your VM.

      • Select Thin Provision or Thick Provision, based on your storage preferences.
      • Select the datastore that you specified in your install-config.yaml file.
    6. On the Select network tab, specify the network that you configured for the cluster, if available.
    7. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further.

      Important

      Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to.

  8. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.

      For a bootstrap machine, specify the URL of the bootstrap Ignition config file that you hosted.

    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • Optional: Override default DHCP networking in vSphere. To enable static IP networking:

        1. Set your static IP configuration:

          $ export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]"

          Example command

          $ export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8"

        2. Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere:

          $ govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=${IPCFG}"
      • Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High. Ensure that your VM’s CPU and memory reservation have the following values:

        • Memory reservation value must be equal to its configured memory size.
        • CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values (a govc-based alternative is sketched after this procedure):

        • guestinfo.ignition.config.data: Locate the base64-encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
    9. Complete the configuration and power on the VM.
  9. Create the rest of the machines for your cluster by following the preceding steps for each machine.

    Important

    You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.
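
If you prefer to script the guestinfo parameters rather than set them in the vSphere Client, a sketch using govc (already used for the optional static IP step, and assuming it is configured with your vCenter credentials) might look like this for a control plane VM:

$ govc vm.change -vm "<vm_name>" \
    -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
    -e "guestinfo.ignition.config.data.encoding=base64" \
    -e "disk.EnableUUID=TRUE"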

1.6.13. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere

You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere.

Prerequisites

  • Obtain the base64-encoded Ignition file for your compute machines.
  • You have access to the vSphere template that you created for your cluster.

Procedure

  1. After the template deploys, deploy a VM for a machine in the cluster.

    1. Right-click the template’s name and click Clone → Clone to Virtual Machine.
    2. On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1.
    3. On the Select a name and folder tab, select the name of the folder that you created for the cluster.
    4. On the Select a compute resource tab, select the name of a host in your datacenter.
    5. Optional: On the Select storage tab, customize the storage options.
    6. On the Select clone options tab, select Customize this virtual machine’s hardware.
    7. On the Customize hardware tab, click VM Options → Advanced.

      • From the Latency Sensitivity list, select High.
      • Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:

        • guestinfo.ignition.config.data: Paste the contents of the base64-encoded compute Ignition config file for this machine type.
        • guestinfo.ignition.config.data.encoding: Specify base64.
        • disk.EnableUUID: Specify TRUE.
    8. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available.
    9. Complete the configuration and power on the VM.
  2. Continue to create more compute machines for your cluster.

1.6.14. Disk partitioning

In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions.

However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node:

  • Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var, such as /var/lib/etcd, a separate partition, but not both.

    Important

    Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them.

  • Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your previous operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions.

Creating a separate /var partition

In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

Procedure

  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifests and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig
    ? SSH Public Key ...
    $ ls $HOME/clusterconfig/openshift/
    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...
  3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    spec:
      config:
        ignition:
          version: 3.1.0
        storage:
          disks:
          - device: /dev/<device_name> 1
            partitions:
            - label: var
              startMiB: <partition_start_offset> 2
              sizeMiB: <partition_size> 3
          filesystems:
            - device: /dev/disk/by-partlabel/var
              path: /var
              format: xfs
        systemd:
          units:
            - name: var.mount 4
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                What=/dev/disk/by-partlabel/var
                Where=/var
                Options=defaults,prjquota 5
                [Install]
                WantedBy=local-fs.target
    1
    The storage device name of the disk that you want to partition.
    2
    When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3
    The size of the data partition in mebibytes.
    4
    The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers, the unit must be named var-lib-containers.mount.
    5
    The prjquota mount option must be enabled for filesystems used for container storage.
    Note

    When creating a separate /var partition, you cannot use different instance types for worker nodes if the instance types do not have the same device name.

  4. Run openshift-install again to create Ignition configs from a set of files in the manifests and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

1.6.15. Creating the cluster

To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites

  • Create the required infrastructure for the cluster.
  • You obtained the installation program and generated the Ignition config files for your cluster.
  • You used the Ignition config files to create RHCOS machines for your cluster.

Procedure

  1. Monitor the bootstrap process:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    Example output

    INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
    INFO API v1.19.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources

    The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.

  2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.

    Important

    You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.
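
For example, if you manage the VMs with govc, one way to delete the bootstrap VM after you remove it from the load balancer; the VM name is a placeholder:

$ govc vm.destroy "<bootstrap_vm_name>"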

1.6.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

1.6.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.19.0
    master-1  Ready     master  63m  v1.19.0
    master-2  Ready     master  64m  v1.19.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved, then after all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.20.0
    master-1  Ready     master  73m  v1.20.0
    master-2  Ready     master  74m  v1.20.0
    worker-0  Ready     worker  11m  v1.20.0
    worker-1  Ready     worker  11m  v1.20.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

1.6.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

  • Your control plane has initialized.

Procedure

  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

  2. Configure the Operators that are not available.

1.6.18.1. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Global Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
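
To verify the change, you can list the catalog sources in the openshift-marketplace namespace; with the default sources disabled, the default entries no longer appear:

$ oc get catalogsources -n openshift-marketplace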

1.6.18.2. Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.6.18.2.1. Configuring registry storage for VMware vSphere

As a cluster administrator, following installation you must configure your registry to use storage.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on VMware vSphere.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • The storage must have 100Gi capacity.
Important

Testing shows issues with using an NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. A patch-based sketch appears after this procedure.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

    If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim: 1

    1
    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
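
As a sketch of the change in step 1, the following patch sets an empty claim so that the Operator creates the image-registry-storage PVC automatically. This is an assumption to verify against your cluster, not the only way to configure the storage:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'
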
1.6.18.2.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

  • To set the image registry storage to an empty directory:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
    Warning

    Configure this option for only non-production clusters.

    If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

    Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

    Wait a few minutes and run the command again.

1.6.18.2.3. Configuring block registry storage for VMware vSphere

To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy.

Important

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure

  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage 1
        namespace: openshift-image-registry 2
      spec:
        accessModes:
        - ReadWriteOnce 3
        resources:
          requests:
            storage: 100Gi 4
      1
      A unique name that represents the PersistentVolumeClaim object.
      2
      The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3
      The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4
      The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml

    Example output

    storage:
      pvc:
        claim: 1

    1
    Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

1.6.19. Completing installation on user-provisioned infrastructure

After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.

Prerequisites

  • Your control plane has initialized.
  • You have completed the initial Operator configuration.

Procedure

  1. Confirm that all the cluster components are online with the following command:

    $ watch -n5 oc get clusteroperators

    Example output

    NAME                                       VERSION AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.6.0   True        False         False      3h56m
    cloud-credential                           4.6.0   True        False         False      29h
    cluster-autoscaler                         4.6.0   True        False         False      29h
    config-operator                            4.6.0   True        False         False      6h39m
    console                                    4.6.0   True        False         False      3h59m
    csi-snapshot-controller                    4.6.0   True        False         False      4h12m
    dns                                        4.6.0   True        False         False      4h15m
    etcd                                       4.6.0   True        False         False      29h
    image-registry                             4.6.0   True        False         False      3h59m
    ingress                                    4.6.0   True        False         False      4h30m
    insights                                   4.6.0   True        False         False      29h
    kube-apiserver                             4.6.0   True        False         False      29h
    kube-controller-manager                    4.6.0   True        False         False      29h
    kube-scheduler                             4.6.0   True        False         False      29h
    kube-storage-version-migrator              4.6.0   True        False         False      4h2m
    machine-api                                4.6.0   True        False         False      29h
    machine-approver                           4.6.0   True        False         False      6h34m
    machine-config                             4.6.0   True        False         False      3h56m
    marketplace                                4.6.0   True        False         False      4h2m
    monitoring                                 4.6.0   True        False         False      6h31m
    network                                    4.6.0   True        False         False      29h
    node-tuning                                4.6.0   True        False         False      4h30m
    openshift-apiserver                        4.6.0   True        False         False      3h56m
    openshift-controller-manager               4.6.0   True        False         False      4h36m
    openshift-samples                          4.6.0   True        False         False      4h30m
    operator-lifecycle-manager                 4.6.0   True        False         False      29h
    operator-lifecycle-manager-catalog         4.6.0   True        False         False      29h
    operator-lifecycle-manager-packageserver   4.6.0   True        False         False      3h59m
    service-ca                                 4.6.0   True        False         False      29h
    storage                                    4.6.0   True        False         False      4h30m

    Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    INFO Waiting up to 30m0s for the cluster to initialize...

    The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
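
    You can also check the overall cluster version status directly. For example:

    $ oc get clusterversion

    The AVAILABLE column reports True when the cluster finishes initializing.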

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
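
    For reference, a minimal sketch of that manual CSR approval; <csr_name> is a placeholder for a request in the Pending state:

    $ oc get csr
    $ oc adm certificate approve <csr_name>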
  2. Confirm that the Kubernetes API server is communicating with the pods.

    1. To view a list of all pods, use the following command:

      $ oc get pods --all-namespaces

      Example output

      NAMESPACE                         NAME                                            READY   STATUS      RESTARTS   AGE
      openshift-apiserver-operator      openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running     1          9m
      openshift-apiserver               apiserver-67b9g                                 1/1     Running     0          3m
      openshift-apiserver               apiserver-ljcmx                                 1/1     Running     0          1m
      openshift-apiserver               apiserver-z25h4                                 1/1     Running     0          2m
      openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8        1/1     Running     0          5m
      ...

    2. View the logs for a pod that is listed in the output of the previous command by using the following command:

      $ oc logs <pod_name> -n <namespace> 1
      1
      Specify the pod name and namespace, as shown in the output of the previous command.

      If the pod logs display, the Kubernetes API server can communicate with the cluster machines.

  3. Register your cluster on the Cluster registration page.

You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere.

1.6.20. Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, you cannot use snapshots to back up volumes or restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.
  2. Clone the persistent volume. An example of cloning the underlying virtual disk follows this list.
  3. Restart the application.
  4. Create a backup of the cloned volume.
  5. Delete the cloned volume.
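
For step 2, one way to clone the underlying virtual disk is with the vmkfstools utility on an ESXi host that has access to the datastore. This is an illustrative sketch only; the datastore name and file paths are example values:

$ vmkfstools -i /vmfs/volumes/datastore1/kubevols/pv-example.vmdk \
    /vmfs/volumes/datastore1/kubevols/pv-example-clone.vmdk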

1.6.21. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

  • See About remote health monitoring for more information about the Telemetry service.

1.6.22. Next steps

  • Customize your cluster.
  • If necessary, you can opt out of remote health reporting.

1.7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure

You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure.

Note

When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them.
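
For example, if the in-tree vSphere provisioner stored the volumes in the default kubevols folder, you might locate and remove the leftover VMDK files with the govc CLI. The datastore name is an example value, and <volume_name> is a placeholder:

$ govc datastore.ls -ds datastore1 kubevols
$ govc datastore.rm -ds datastore1 kubevols/<volume_name>.vmdk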

1.7.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Note

After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.

Prerequisites

  • Have a copy of the installation program that you used to deploy the cluster.
  • Have the files that the installation program generated when you created your cluster.

Procedure

  1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

    $ ./openshift-install destroy cluster \
    --dir <installation_directory> \ 1
    --log-level info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different details, specify warn, debug, or error instead of info.
    Note

    You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

  2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.