Installing on IBM PowerVC


OpenShift Container Platform 4.21

Installing OpenShift Container Platform on IBM PowerVC

Red Hat OpenShift Documentation Team

Abstract

This document describes how to install OpenShift Container Platform on IBM PowerVC.

Chapter 1. Preparing to install on IBM PowerVC

You can install OpenShift Container Platform on IBM® Power® Virtualization Center (IBM PowerVC) using installer-provisioned infrastructure.

1.1. Choosing a method to install OpenShift Container Platform on IBM PowerVC

You can install a cluster on IBM PowerVC infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:

Installing a cluster on IBM PowerVC with customizations: You can install a customized cluster on IBM PowerVC. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.

Chapter 2. Installing a cluster on IBM PowerVC with customizations

In OpenShift Container Platform version 4.21, you can install a customized cluster on IBM PowerVC. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.

2.1. Resource guidelines for installing OpenShift Container Platform on IBM PowerVC

To support an OpenShift Container Platform installation, your IBM PowerVC environment should have the following resources available:

Table 2.1. Recommended resources for a default OpenShift Container Platform cluster on IBM PowerVC

  Resource          Value
  Subnets           1
  RAM               88 GB
  vCPUs             22
  Volume storage    275 GB
  Instances         7

A cluster might function with fewer than recommended resources.

2.3. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Select your infrastructure provider from the Run it yourself section of the page.
  3. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
  4. Place the downloaded file in the directory where you want to store the installation configuration files.

    Important
    • The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
    • Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
  5. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  6. Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

    Tip

    Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.

2.4. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster that you install on IBM PowerVC.

Prerequisites

  • You have the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory>
      • For <installation_directory>, specify the name of the directory to store the files that the installation program creates.

        When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Enter a descriptive name for your cluster.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
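The backup step can be performed with a simple copy. The following sketch assumes a hypothetical installation directory named ocp-install; substitute your own directory name:

```shell
# Hedged sketch: back up install-config.yaml before running the installer.
# "ocp-install" is a hypothetical installation directory name; the printf
# line only simulates a file that you would normally create with the
# installation program.
mkdir -p ocp-install
printf 'apiVersion: v1\nbaseDomain: example.com\n' > ocp-install/install-config.yaml
cp ocp-install/install-config.yaml install-config.yaml.backup
```

Because the installation program consumes the file in place, keeping a copy outside the installation directory lets you install additional clusters from the same configuration.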

2.5. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • You have configured an account with the cloud platform that hosts your cluster.
  • You have the OpenShift Container Platform installation program and the pull secret for your cluster.
  • You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure

  • In the directory that contains the installation program, initialize the cluster deployment by running the following command:

    $ ./openshift-install create cluster --dir <installation_directory> \
        --log-level=info

    For <installation_directory>, specify the location of your customized ./install-config.yaml file.

    To view different installation details, specify warn, debug, or error instead of info.

Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
  • Credential information also outputs to <installation_directory>/.openshift_install.log.
Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

2.6. Installing the OpenShift CLI on Linux

To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Linux.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.

Download and install the new version of oc.

Procedure

  1. Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
  2. Select the architecture from the Product Variant list.
  3. Select the appropriate version from the Version list.
  4. Click Download Now next to the OpenShift v4.21 Linux Clients entry and save the file.
  5. Unpack the archive:

    $ tar xvf <file>
  6. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

2.7. Installing the OpenShift CLI on Windows

To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Windows.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.

Download and install the new version of oc.

Procedure

  1. Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version list.
  3. Click Download Now next to the OpenShift v4.21 Windows Client entry and save the file.
  4. Extract the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH variable.

    To check your PATH variable, open the command prompt and execute the following command:

    C:\> path

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    C:\> oc <command>

2.8. Installing the OpenShift CLI on macOS

To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.

Download and install the new version of oc.

Procedure

  1. Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
  2. Select the architecture from the Product Variant list.
  3. Select the appropriate version from the Version list.
  4. Click Download Now next to the OpenShift v4.21 macOS Clients entry and save the file.

    Note

    For macOS arm64, choose the OpenShift v4.21 macOS arm64 Client entry.

  5. Unpack and unzip the archive.
  6. Move the oc binary to a directory on your PATH variable.

    To check your PATH variable, open a terminal and execute the following command:

    $ echo $PATH

Verification

  • Verify your installation by using an oc command:

    $ oc <command>

2.9. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

2.10. Logging in to the cluster by using the CLI

To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.

The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the OpenShift CLI (oc).

Procedure

  1. Export the kubeadmin credentials by running the following command:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

    where:

    <installation_directory>
    Specifies the path to the directory that stores the installation files.
  2. Verify you can run oc commands successfully using the exported configuration by running the following command:

    $ oc whoami

    Example output

    system:admin

2.11. Telemetry access for OpenShift Container Platform

To provide metrics about cluster health and the success of updates, the Telemetry service requires internet access. When connected, this service runs automatically by default and registers your cluster to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. For more information about subscription watch, see "Data Gathered and Used by Red Hat’s subscription services" in the Additional resources section.

Chapter 3. Installation configuration parameters for IBM PowerVC

Before you deploy an OpenShift Container Platform cluster on IBM® Power® Virtualization Center (IBM PowerVC), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further.

3.1. Available installation configuration parameters

The following tables specify the required, optional, and IBM PowerVC-specific installation configuration parameters that you can set as part of the installation process.

Important

After installation, you cannot change these parameters in the install-config.yaml file.

3.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 3.1. Required parameters
apiVersion:

The API version for the install-config.yaml content. The current version is v1. The installation program might also support older API versions.

Value: String

baseDomain:

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

Value: A fully-qualified domain or subdomain name, such as example.com.

metadata:

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Value: Object

metadata:
  name:

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

Value: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform:

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, powervs, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Value: Object

pullSecret:

Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

Value:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
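Putting the required parameters together, a minimal install-config.yaml header might begin as follows. All values here are hypothetical placeholders, and the resulting full cluster DNS name combines metadata.name and baseDomain:

```yaml
# Hypothetical values for illustration only. The full cluster DNS name
# becomes <metadata.name>.<baseDomain>, here dev.example.com.
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  powervc: {}
pullSecret: '<pull_secret>'
```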

3.1.2. Additional IBM PowerVC configuration parameters

Additional configuration parameters are described in the following table:

Table 3.2. Additional parameters
platform:
  powervc:
    cloud:

The name of the cloud to use from the list of clouds in the clouds.yaml file.

In the cloud configuration in the clouds.yaml file, if possible, use application credentials rather than a user name and password combination. Using application credentials avoids disruptions from secret propagation that follow user name and password rotation.

Value: String, for example MyCloud.
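As a hedged sketch, a clouds.yaml entry for a cloud named MyCloud might look like the following, assuming an OpenStack-style identity endpoint and application credentials; the URL and credential values are placeholders:

```yaml
# Hypothetical clouds.yaml entry; auth_url and the credential values
# are placeholders, not real endpoints or secrets.
clouds:
  MyCloud:
    auth:
      auth_url: https://powervc.example.com:5000/v3
      application_credential_id: <credential_id>
      application_credential_secret: <credential_secret>
    auth_type: v3applicationcredential
```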

3.1.3. Optional configuration parameters

Optional configuration parameters are described in the following table:

Table 3.3. Optional parameters
compute:
  platform:
    powervc:
      zones:

Compute availability zones to install machines on. If this parameter is not set, the installation program relies on the default settings that the administrator configured.

Value: A list of strings. For example, ["zone-1", "zone-2"].

controlPlane:
  platform:
    powervc:
      zones:

Control plane availability zones to install machines on. If this parameter is not set, the installation program relies on the default settings that the administrator configured.

Value: A list of strings. For example, ["zone-1", "zone-2"].

platform:
  powervc:
    clusterOSImage:

The name of the existing image.

Value: the name of an existing image, for example my-rhcos.

platform:
  powervc:
    controlPlanePort:
      fixedIPs:

Subnets for the machines to use.

Value: A list of subnet names or UUIDs to use in cluster installation.

platform:
  powervc:
    controlPlanePort:
      network:

A network for the machines to use.

Value: The UUID or name of a network to use in cluster installation.

platform:
  powervc:
    defaultMachinePlatform:

The default machine pool platform configuration.

Value:

{
   "type": "my-compute-template"
}
platform:
  powervc:
    externalDNS:

IP addresses for external DNS servers that cluster instances use for DNS resolution.

Value: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform:
  powervc:
    loadbalancer:

Whether or not to use the default, internal load balancer. If the value is set to UserManaged, this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault, the cluster uses the default load balancer.

Value: UserManaged or OpenShiftManagedDefault.

platform:
  powervc:
    apiVIPs:

Virtual IP (VIP) addresses that you configured for control plane API access.

Value: A list of IP addresses as strings. For example, ["10.0.0.30", "10.0.0.31"]

platform:
  powervc:
    ingressVIPs:

Virtual IP (VIP) addresses that you configured for cluster ingress.

Value: A list of IP addresses as strings. For example, ["10.0.0.32", "10.0.0.33"]
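Taken together, the preceding platform parameters can be sketched as the following hypothetical stanza; every name and address is a placeholder taken from the examples above, not a working configuration:

```yaml
# Hypothetical platform stanza for illustration only.
platform:
  powervc:
    cloud: MyCloud
    clusterOSImage: my-rhcos
    externalDNS:
      - "8.8.8.8"
    apiVIPs:
      - "10.0.0.30"
    ingressVIPs:
      - "10.0.0.32"
```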

3.1.4. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or configure different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 3.4. Network parameters
networking:

The configuration for the cluster network.

Value: Object

Note

You cannot change parameters specified by the networking object after installation.

networking:
  networkType:

The Red Hat OpenShift Networking network plugin to install.

Value: OVNKubernetes. OVNKubernetes is a Container Network Interface (CNI) plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes.

networking:
  clusterNetwork:

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

Value: An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
networking:
  clusterNetwork:
    cidr:

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

Value: An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking:
  clusterNetwork:
    hostPrefix:

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

Value: A subnet prefix.

The default value is 23.
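The hostPrefix arithmetic above can be checked quickly. Each node receives 2^(32 - hostPrefix) addresses, minus the network and broadcast addresses:

```shell
# Pod IP addresses available per node for a given hostPrefix value:
# 2^(32 - hostPrefix) addresses, minus the network and broadcast addresses.
hostPrefix=23
podIPs=$(( (1 << (32 - hostPrefix)) - 2 ))
echo "$podIPs"   # prints 510
```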

networking:
  serviceNetwork:

The IP address block for services. The default value is 172.30.0.0/16.

The OVN-Kubernetes network plugin supports only a single IP address block for the service network.

Value: An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16
networking:
  machineNetwork:

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

Value: An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
networking:
  machineNetwork:
    cidr:

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power® Virtual Server. For libvirt, the default value is 192.168.126.0/24. For IBM Power® Virtual Server, the default value is 192.168.0.0/24.

Value: An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

networking:
  ovnKubernetesConfig:
    ipv4:
      internalJoinSubnet:

Configures the IPv4 join subnet that is used internally by ovn-kubernetes. This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation.

Value: An IP network block in CIDR notation. The default value is 100.64.0.0/16.

3.1.5. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 3.5. Optional parameters
additionalTrustBundle:

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle might also be used when a proxy has been configured.

Value: String

capabilities:

Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing.

Value: String array

capabilities:
  baselineCapabilitySet:

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, v4.12 and vCurrent. The default value is vCurrent.

Value: String

capabilities:
  additionalEnabledCapabilities:

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You can specify multiple capabilities in this parameter.

Value: String array

cpuPartitioningMode:

Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. You can only enable workload partitioning during installation. You cannot disable it after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section.

Value: None or AllNodes. None is the default value.

compute:

The configuration for the machines that comprise the compute nodes.

Value: Array of MachinePool objects.

compute:
  architecture:

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. The valid value is ppc64le (the default).

Value: String

compute:
  hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Value: Enabled or Disabled

compute:
  name:

Required if you use compute. The name of the machine pool.

Value: worker

compute:
  platform:

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

Value: aws, azure, gcp, ibmcloud, nutanix, openstack, powervs, vsphere, or {}

compute:
  replicas:

The number of compute machines, which are also known as worker machines, to provision.

Value: A positive integer greater than or equal to 2. The default value is 3.

featureSet:

Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".

Value: String. The name of the feature set to enable, such as TechPreviewNoUpgrade.

controlPlane:

The configuration for the machines that form the control plane.

Value: Array of MachinePool objects.

controlPlane:
  architecture:

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. The valid value is the default: ppc64le.

Value: String

controlPlane:
  hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Value: Enabled or Disabled

controlPlane:
  name:

Required if you use controlPlane. The name of the machine pool.

Value: master

controlPlane:
  platform:

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

Value: aws, azure, gcp, ibmcloud, nutanix, openstack, powervs, vsphere, or {}

controlPlane:
  replicas:

The number of control plane machines to provision.

Value: Supported values are 3, or 1 when deploying single-node OpenShift.
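The compute and controlPlane pool parameters described above can be sketched as the following minimal fragment; the replica counts shown are the documented defaults:

```yaml
# Minimal machine-pool fragment for illustration.
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
```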

arbiter:
    name: arbiter

The OpenShift Container Platform cluster requires a name for arbiter nodes. For example, arbiter.

arbiter:
    replicas: 1

The replicas parameter sets the number of arbiter nodes for the OpenShift Container Platform cluster. You cannot set this field to a value that is greater than 1.

credentialsMode:

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.

Value: Mint, Passthrough, Manual, or an empty string ("").

fips:

Enable or disable FIPS mode. The default is false (disabled). If you enable FIPS mode, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that RHCOS provides instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.

When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

Important

If you are using Azure File storage, you cannot enable FIPS mode.

Value: false or true
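In install-config.yaml, this is a top-level boolean field. A minimal fragment that enables FIPS mode might look like the following; remember that the installation program itself must then run on a RHEL host that is configured for FIPS mode:

```yaml
fips: true
```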

endpoint:
  name: <endpoint_name>
  clusterUseOnly: <true_or_false>

The name parameter contains the name of the Private Service Connect (PSC) endpoint.

Important

When clusterUseOnly is false, its default setting, you must run the installation program from a bastion host that is within the same VPC where you want to deploy the cluster.

When you want the installation program to use the public API endpoints and the cluster operators to use the API endpoint overrides, set clusterUseOnly to true. When you want both the installation program and the cluster operators to use the API endpoint overrides, for example when you run the installation program from a bastion host that is within the same VPC where you want to deploy the cluster, set clusterUseOnly to false. This parameter is optional and defaults to false.

Value: name is a string; clusterUseOnly is a boolean.

imageContentSources:

Sources and repositories for the release-image content.

Value: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources:
  source:

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

Value: String

imageContentSources:
  mirrors:

Specify one or more repositories that might also contain the same images.

Value: Array of strings
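Putting the source and mirrors fields together, a mirror-registry configuration could look like the following sketch. The mirror.example.com host name is a placeholder; substitute the registry that hosts your mirrored release images:

```yaml
imageContentSources:
- source: quay.io/openshift-release-dev/ocp-release
  mirrors:
  - mirror.example.com/ocp/release
- source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  mirrors:
  - mirror.example.com/ocp/release
```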

publish:

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Value: Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

Important

If the value of the field is set to Internal, the cluster becomes non-functional. For more information, see BZ#1953035.

sshKey:

The SSH key to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Value: For example, sshKey: ssh-ed25519 AAAA...

Chapter 4. Uninstalling a cluster on IBM PowerVC

You can remove a cluster that you deployed to IBM PowerVC.

You can remove a cluster that uses installer-provisioned infrastructure that you provisioned from your cloud platform.

Note

After uninstallation, check your cloud provider for any resources that were not removed properly, especially with user-provisioned infrastructure clusters. Some resources might remain because the installation program either did not create them or cannot access them.

Prerequisites

  • You have a copy of the installation program that you used to deploy the cluster.
  • You have the files that the installation program generated when you created your cluster.

Procedure

  1. From the directory that has the installation program on the computer that you used to install the cluster, run the following command:

    $ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info

    where:

    <installation_directory>
Specifies the path to the directory in which you stored the installation files.
    --log-level info

    To view different details, specify warn, debug, or error instead of info.

    Note

    You must specify the directory that includes the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

  2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
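Because the destroy step fails without the metadata.json file, a simple pre-flight check can save a failed run. The following sketch assumes a hypothetical installation directory named ./mycluster; adjust the path to your environment:

```shell
# Hypothetical installation directory; adjust to your environment.
INSTALL_DIR="./mycluster"

# The installation program needs metadata.json to delete the cluster.
if [ -f "$INSTALL_DIR/metadata.json" ]; then
  echo "metadata.json found; ready to run destroy"
else
  echo "metadata.json missing; destroy will fail" >&2
fi
```

If the check passes, run the `./openshift-install destroy cluster --dir <installation_directory>` command as shown in step 1.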

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
