Chapter 2. Preparing to update a cluster


2.1. Preparing to update to OpenShift Container Platform 4.19

Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update.

2.1.1. Kubernetes API removals

OpenShift Container Platform 4.19 uses Kubernetes 1.32, which removed several deprecated Kubernetes APIs.

A cluster administrator must provide a manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.18 to 4.19. This is to help prevent issues after upgrading to OpenShift Container Platform 4.19, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this evaluation and migration is complete, the administrator can provide the acknowledgment.

Before you can update your OpenShift Container Platform 4.18 cluster to 4.19, you must provide the administrator acknowledgment.

2.1.1.1. Removed Kubernetes APIs

OpenShift Container Platform 4.19 uses Kubernetes 1.32, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation.

Table 2.1. APIs removed from Kubernetes 1.32
Resource                     Removed API                            Migrate to                        Notable changes

FlowSchema                   flowcontrol.apiserver.k8s.io/v1beta3   flowcontrol.apiserver.k8s.io/v1   No

PriorityLevelConfiguration   flowcontrol.apiserver.k8s.io/v1beta3   flowcontrol.apiserver.k8s.io/v1   Yes

2.1.1.2. Evaluating your cluster for removed APIs

There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs.

2.1.1.2.1. Reviewing alerts to identify uses of removed APIs

Two alerts fire when an API is in use that will be removed in the next release:

  • APIRemovedInNextReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform release.
  • APIRemovedInNextEUSReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.

If either of these alerts is firing in your cluster, review the alerts and take action to clear them by migrating manifests and API clients to use the new API version.

Use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs, because the alerts do not provide this information. Additionally, some APIs might not trigger these alerts but are still captured by APIRequestCount. The alerts are tuned to be less sensitive to avoid alerting fatigue in production systems.

2.1.1.2.2. Using APIRequestCount to identify uses of removed APIs

You can use the APIRequestCount API to track API requests and review whether any of them are using one of the removed APIs.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.

Procedure

  • Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use:

    $ oc get apirequestcounts

    Example output

    NAME                                                                 REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
    ...
    flowschemas.v1beta3.flowcontrol.apiserver.k8s.io                     1.32               0                       3
    ...
    prioritylevelconfigurations.v1beta3.flowcontrol.apiserver.k8s.io     1.32               0                       1
    ...

    Important

    You can safely ignore the following entries that appear in the results:

    • The system:serviceaccount:kube-system:generic-garbage-collector and the system:serviceaccount:kube-system:namespace-controller users might appear in the results because these services invoke all registered APIs when searching for resources to remove.
    • The system:kube-controller-manager and system:cluster-policy-controller users might appear in the results because they walk through all resources while enforcing various policies.

    You can also use -o jsonpath to filter the results:

    $ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'

    Example output

    1.32	flowschemas.v1beta3.flowcontrol.apiserver.k8s.io
    1.32	prioritylevelconfigurations.v1beta3.flowcontrol.apiserver.k8s.io

2.1.1.2.3. Using APIRequestCount to identify which workloads are using the removed APIs

You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.

Procedure

  • Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API:

    $ oc get apirequestcounts <resource>.<version>.<group> -o yaml

    For example:

    $ oc get apirequestcounts flowschemas.v1beta3.flowcontrol.apiserver.k8s.io -o yaml

    You can also use -o jsonpath to extract the username and userAgent values from an APIRequestCount resource:

    $ oc get apirequestcounts flowschemas.v1beta3.flowcontrol.apiserver.k8s.io \
      -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}' \
      | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT

    Example output

    VERBS     USERNAME                            USERAGENT
    create    system:admin                        oc/4.13.0 (linux/amd64)
    list get  system:serviceaccount:myns:default  oc/4.16.0 (linux/amd64)
    watch     system:serviceaccount:myns:webhook  webhook/v1.0.0 (linux/amd64)

2.1.1.3. Migrating instances of removed APIs

For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation.
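In most cases, migrating a manifest means updating its apiVersion and re-applying it. For a resource with no notable schema changes, such as FlowSchema in the table above, a minimal before-and-after sketch (using a hypothetical FlowSchema name) looks like the following:

# Before: deprecated API version, removed in Kubernetes 1.32
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: example-flowschema
# ...unchanged fields...

# After: supported API version
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: example-flowschema
# ...unchanged fields...

Resources with notable changes, such as PriorityLevelConfiguration, might also require field-level updates; always confirm the details in the Kubernetes deprecation guide.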

2.1.1.4. Providing the administrator acknowledgment

After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.18 to 4.19.

Warning

Be aware that all responsibility falls on the administrator to ensure that all uses of removed APIs have been resolved and migrated as necessary before providing this administrator acknowledgment. OpenShift Container Platform can assist with the evaluation, but cannot identify all possible uses of removed APIs, especially idle workloads or external tools.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.

Procedure

  • Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.19:

    $ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.18-kube-1.32-api-removals-in-4.19":"true"}}' --type=merge
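    To confirm that the acknowledgment was recorded, you can optionally read the config map back. This check is a suggestion rather than part of the required procedure:

    $ oc -n openshift-config get cm admin-acks -o jsonpath='{.data}'

    The output should include "ack-4.18-kube-1.32-api-removals-in-4.19":"true".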

2.1.2. Assessing the risk of conditional updates

A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them.

The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update.

When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat.

However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk:

  • Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment.
  • Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support.
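You can review conditional update targets and their associated risk statements from the command line. For example, the following command lists update targets that are not recommended for your cluster, along with the reason for each; treat it as a quick reference rather than a substitute for the linked risk details:

$ oc adm upgrade --include-not-recommended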

2.1.3. etcd backups before cluster updates

etcd backups record the state of your cluster and all of its resource objects. You can use backups to attempt restoring the state of a cluster in disaster scenarios where you cannot recover a cluster in its currently dysfunctional state.

In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced catastrophic conditions that cannot be fixed without reverting to the previous cluster version. etcd restorations might be destructive and destabilizing to a running cluster; use them only as a last resort.

Warning

Due to their high consequences, etcd restorations are not intended to be used as a rollback solution. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.

There are several factors that affect the viability of an etcd restoration. For more information, see "Backing up etcd data" and "Restoring to a previous cluster state".
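For reference, an etcd backup is typically taken by running the cluster backup script on a control plane node. One common form, where <control_plane_node> is a placeholder for one of your control plane node names, is:

$ oc debug node/<control_plane_node> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup

See "Backing up etcd data" for the full, supported procedure.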

2.1.4. Preparing for Gateway API management succession by the Ingress Operator

Starting in OpenShift Container Platform 4.19, the Ingress Operator manages the lifecycle of any Gateway API custom resource definitions (CRDs). As a result, you cannot create, update, or delete any CRDs in the API groups that belong to Gateway API.

If you are updating from a version of OpenShift Container Platform earlier than 4.19, where this management was not present, you must replace or remove any Gateway API CRDs that already exist in the cluster so that they conform to the OpenShift Container Platform specification required by the Ingress Operator. OpenShift Container Platform 4.19 requires Gateway API Standard version 1.2.1 CRDs.

Warning

Updating or deleting Gateway API resources can result in downtime and loss of service or data. Be sure you understand how this will affect your cluster before performing the steps in this procedure. If necessary, back up any Gateway API objects in YAML format so that you can restore them later.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • Optional: You have backed up any necessary Gateway API objects. A minimal backup example follows this list.

    Warning

    Backup and restore can fail or result in data loss for any CRD fields that were present in the old definitions but are absent in the new definitions.
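A minimal backup sketch for the optional prerequisite above might look like the following; adjust the resource types and the file name for your environment:

$ oc get gatewayclasses.gateway.networking.k8s.io,gateways.gateway.networking.k8s.io,httproutes.gateway.networking.k8s.io,grpcroutes.gateway.networking.k8s.io,referencegrants.gateway.networking.k8s.io \
    -A -o yaml > gateway-api-backup.yaml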

Procedure

  1. List all the Gateway API CRDs that you need to remove by running the following command:

    $ oc get crd | grep -F -e gateway.networking.k8s.io -e gateway.networking.x-k8s.io

    Example output

    gatewayclasses.gateway.networking.k8s.io
    gateways.gateway.networking.k8s.io
    grpcroutes.gateway.networking.k8s.io
    httproutes.gateway.networking.k8s.io
    referencegrants.gateway.networking.k8s.io

  2. Delete the Gateway API CRDs from the previous step by running the following command:

    $ oc delete crd gatewayclasses.gateway.networking.k8s.io && \
    oc delete crd gateways.gateway.networking.k8s.io && \
    oc delete crd grpcroutes.gateway.networking.k8s.io && \
    oc delete crd httproutes.gateway.networking.k8s.io && \
    oc delete crd referencegrants.gateway.networking.k8s.io
    Important

    Deleting CRDs removes every custom resource that relies on them and can result in data loss. Back up any necessary data before deleting the Gateway API CRDs. Any controller that was previously managing the lifecycle of the Gateway API CRDs will fail to operate properly. Attempting to force its use in conjunction with the Ingress Operator to manage Gateway API CRDs might prevent the cluster update from succeeding.

  3. Get the supported Gateway API CRDs by running the following command:

    $ oc apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
    Warning

    You can perform this step without deleting your CRDs. If your update to a CRD removes a field that is used by a custom resource, you can lose data. Updating a CRD a second time, to a version that re-adds a field, can cause any previously deleted data to reappear. Any third-party controller that depends on a specific Gateway API CRD version that is not supported in OpenShift Container Platform 4.19 will break upon updating that CRD to one supported by Red Hat.

    For more information on the OpenShift Container Platform implementation and the dead fields issue, see Gateway API implementation for OpenShift Container Platform.
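    To confirm which Gateway API version is installed after this step, you can check the bundle-version annotation that the upstream install manifest sets on the CRDs; this check assumes the annotation is present:

    $ oc get crd gatewayclasses.gateway.networking.k8s.io \
      -o jsonpath='{.metadata.annotations.gateway\.networking\.k8s\.io/bundle-version}'

    The expected output for OpenShift Container Platform 4.19 is v1.2.1.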

2.1.5. Best practices for cluster updates

OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request.

This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update.

2.1.5.2. Address all critical alerts on the cluster

Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster.

In the Administrator perspective of the web console, navigate to Observe → Alerting to find critical alerts.

2.1.5.3. Ensure that the cluster is in an Upgradeable state

When one or more Operators have not reported their Upgradeable condition as True for more than an hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does not block patch updates, but you cannot perform a minor version update until you resolve this alert and all Operators report Upgradeable as True.

For more information about the Upgradeable condition, see "Understanding cluster Operator condition types" in the additional resources section.
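If you prefer the command line, one way to find the Operators that are reporting Upgradeable as False is to filter the cluster Operator conditions. This sketch assumes the jq tool is available on your workstation:

$ oc get clusteroperators -o json | jq -r '
    .items[]
    | select(any(.status.conditions[]?; .type == "Upgradeable" and .status == "False"))
    | .metadata.name'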

2.1.5.3.1. SDN support removal

The OpenShift SDN network plugin was deprecated in versions 4.15 and 4.16. With this release, the SDN network plugin is no longer supported and the content has been removed from the documentation.

If your OpenShift Container Platform cluster is still using the OpenShift SDN CNI, see Migrating from the OpenShift SDN network plugin.

Important

It is not possible to update a cluster to OpenShift Container Platform 4.17 if it is using the OpenShift SDN network plugin. You must migrate to the OVN-Kubernetes plugin before upgrading to OpenShift Container Platform 4.17.
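You can confirm which network plugin your cluster currently uses before planning the update, for example:

$ oc get network.config/cluster -o jsonpath='{.status.networkType}'

An output of OpenShiftSDN indicates that you must migrate to OVN-Kubernetes before updating.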

2.1.5.4. Ensure that enough spare nodes are available

A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster’s ability to perform an update with minimal disruption to cluster workloads.

Depending on the configured value of the cluster’s maxUnavailable spec, the cluster might not be able to apply machine configuration changes to nodes if there is an unavailable node. Additionally, if compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to another node while the first node is taken offline for an update.

Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates.
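The following commands are one way to spot-check node availability and the configured maxUnavailable value for the worker pool; an empty result for the second command means the field is unset and the default of 1 applies:

$ oc get nodes
$ oc get machineconfigpool worker -o jsonpath='{.spec.maxUnavailable}'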

Warning

The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool.

2.1.5.5. Ensure that the cluster’s PodDisruptionBudget is properly configured

You can use the PodDisruptionBudget object to define the minimum number or percentage of pod replicas that must be available at any given time. This configuration protects workloads from disruptions during maintenance tasks such as cluster updates.

However, it is possible to configure the PodDisruptionBudget for a given topology in a way that prevents nodes from being drained and updated during a cluster update.

When planning a cluster update, check the configuration of the PodDisruptionBudget object for the following factors:

  • For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the PodDisruptionBudget.
  • For workloads that are not highly available, make sure they are either not protected by a PodDisruptionBudget or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination.
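A quick way to review PodDisruptionBudget objects across the cluster is to list them and look for entries whose allowed disruptions are 0, because those budgets can prevent a node from draining:

$ oc get poddisruptionbudget -A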

2.2. Preparing to update a cluster with manually maintained credentials

The Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials is False by default.

  • For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the next version. This annotation changes the Upgradable status to True.
  • For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked.

Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to.

2.2.1. Update requirements for clusters with manually maintained credentials

Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release.

If the cloud credential management for your cluster was configured using the CCO utility (ccoctl), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources.

After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update.

Note

The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command-line tools.

2.2.1.1. Cloud credential configuration options and update requirements by platform type

Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements.

For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration.

Figure 2.1. Credentials update requirements by platform type

Decision tree showing the possible update paths for your cluster depending on the configured CCO credentials mode.
Red Hat OpenStack Platform (RHOSP) and VMware vSphere

These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation.

Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process.

IBM Cloud and Nutanix

Clusters installed on these platforms are configured using the ccoctl utility.

Administrators of clusters on these platforms must take the following actions:

  1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
  2. Configure the ccoctl utility for the new release and use it to update the cloud provider resources.
  3. Indicate that the cluster is ready to update with the upgradeable-to annotation.
Microsoft Azure Stack Hub

These clusters use manual mode with long-term credentials and do not use the ccoctl utility.

Administrators of clusters on these platforms must take the following actions:

  1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
  2. Manually update the cloud provider resources for the new release.
  3. Indicate that the cluster is ready to update with the upgradeable-to annotation.
Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP)

Clusters installed on these platforms support multiple CCO modes.

The required update process depends on the mode that the cluster is configured to use. If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information.

2.2.1.2. Determining the Cloud Credential Operator mode by using the web console

You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console.

Note

Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator permissions.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  2. Navigate to Administration → Cluster Settings.
  3. On the Cluster Settings page, select the Configuration tab.
  4. Under Configuration resource, select CloudCredential.
  5. On the CloudCredential details page, select the YAML tab.
  6. In the YAML block, check the value of spec.credentialsMode. The following values are possible, though not all are supported on all platforms:

    • '': The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
    • Mint: The CCO is operating in mint mode.
    • Passthrough: The CCO is operating in passthrough mode.
    • Manual: The CCO is operating in manual mode.
    Important

    To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '', Mint, or Manual, you must investigate further.

    AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating.

    An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object.

  7. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads → Secrets and look for the root secret for your cloud provider.

    Note

    Ensure that the Project dropdown is set to All Projects.

    Platform   Secret name

    AWS        aws-creds

    GCP        gcp-credentials

    • If you see one of these values, your cluster is using mint or passthrough mode with the root secret present.
    • If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed.
  8. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values.

    1. Navigate to Administration → Cluster Settings.
    2. On the Cluster Settings page, select the Configuration tab.
    3. Under Configuration resource, select Authentication.
    4. On the Authentication details page, select the YAML tab.
    5. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter.

      • A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster.
      • An empty value ('') indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility.

Next steps

  • If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process.
  • If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process.
  • If your cluster was configured using the CCO utility (ccoctl), you must take the following actions:

    1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
    2. Configure the ccoctl utility for the new release and use it to update the cloud provider resources.
    3. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
  • If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions:

    1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
    2. Manually update the cloud provider resources for the new release.
    3. Update the upgradeable-to annotation to indicate that the cluster is ready to update.

2.2.1.3. Determining the Cloud Credential Operator mode by using the CLI

You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI.

Note

Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to oc on the cluster as a user with the cluster-admin role.
  2. To determine the mode that the CCO is configured to use, enter the following command:

    $ oc get cloudcredentials cluster \
      -o=jsonpath={.spec.credentialsMode}

    The following output values are possible, though not all are supported on all platforms:

    • '': The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
    • Mint: The CCO is operating in mint mode.
    • Passthrough: The CCO is operating in passthrough mode.
    • Manual: The CCO is operating in manual mode.
    Important

    To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '', Mint, or Manual, you must investigate further.

    AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating.

    An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object.

  3. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command:

    $ oc get secret <secret_name> \
      -n=kube-system

    where <secret_name> is aws-creds for AWS or gcp-credentials for GCP.

    If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster.

  4. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command:

    $ oc get authentication cluster \
      -o jsonpath \
      --template='{ .spec.serviceAccountIssuer }'

    This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object.

    • An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster.
    • An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility.

Next steps

  • If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process.
  • If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process.
  • If your cluster was configured using the CCO utility (ccoctl), you must take the following actions:

    1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
    2. Configure the ccoctl utility for the new release and use it to update the cloud provider resources.
    3. Update the upgradeable-to annotation to indicate that the cluster is ready to update.
  • If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions:

    1. Extract and prepare the CredentialsRequest custom resources (CRs) for the new release.
    2. Manually update the cloud provider resources for the new release.
    3. Update the upgradeable-to annotation to indicate that the cluster is ready to update.

2.2.2. Extracting and preparing credentials request resources

Before updating a cluster that uses the Cloud Credential Operator (CCO) in manual mode, you must extract and prepare the CredentialsRequest custom resources (CRs) for the new release.

Prerequisites

  • Install the OpenShift CLI (oc) that matches the version that you are updating to.
  • Log in to the cluster as a user with cluster-admin privileges.

Procedure

  1. Obtain the pull spec for the update that you want to apply by running the following command:

    $ oc adm upgrade

    The output of this command includes pull specs for the available updates similar to the following:

    Partial example output

    ...
    Recommended updates:
    
    VERSION IMAGE
    4.19.0  quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
    ...

  2. Set a $RELEASE_IMAGE variable with the release image that you want to use by running the following command:

    $ RELEASE_IMAGE=<update_pull_spec>

    where <update_pull_spec> is the pull spec for the release image that you want to use. For example:

    quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
  3. Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --included \ 1
      --to=<path_to_directory_for_credentials_requests> 2

    1 The --included parameter includes only the manifests that your specific cluster configuration requires for the target release.
    2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

    This command creates a YAML file for each CredentialsRequest object.

  4. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored.

    Sample AWS CredentialsRequest object

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: cloud-credential-operator-iam-ro
      namespace: openshift-cloud-credential-operator
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - effect: Allow
          action:
          - iam:GetUser
          - iam:GetUserPolicy
          - iam:ListAccessKeys
          resource: "*"
      secretRef:
        name: cloud-credential-operator-iam-ro-creds
        namespace: openshift-cloud-credential-operator 1

    1 This field indicates the namespace which must exist to hold the generated secret.

    The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values.

  5. For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace, create the namespace by running the following command:

    $ oc create namespace <component_namespace>

Next steps

  • If the cloud credential management for your cluster was configured using the CCO utility (ccoctl), configure the ccoctl utility for a cluster update and use it to update your cloud provider resources.
  • If your cluster was not configured with the ccoctl utility, manually update your cloud provider resources.

2.2.3. Configuring the Cloud Credential Operator utility for a cluster update

To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility (ccoctl) binary.

Note

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).
  • Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster.
  • You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster.

Procedure

  1. Set a variable for the OpenShift Container Platform release image by running the following command:

    $ RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})
  2. Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
    Note

    Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

  3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

    $ oc image extract $CCO_IMAGE \
      --file="/usr/bin/ccoctl.<rhel_version>" \ 1
      -a ~/.pull-secret

    1 For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
    • rhel8: Specify this value for hosts that use RHEL 8.
    • rhel9: Specify this value for hosts that use RHEL 9.
  4. Change the permissions to make ccoctl executable by running the following command:

    $ chmod 775 ccoctl.<rhel_version>

Verification

  • To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

    $ ./ccoctl.rhel9

    Example output

    OpenShift credentials provisioning tool
    
    Usage:
      ccoctl [command]
    
    Available Commands:
      aws          Manage credentials objects for AWS cloud
      azure        Manage credentials objects for Azure
      gcp          Manage credentials objects for Google cloud
      help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
      nutanix      Manage credentials objects for Nutanix
    
    Flags:
      -h, --help   help for ccoctl
    
    Use "ccoctl [command] --help" for more information about a command.

2.2.4. Updating cloud provider resources with the Cloud Credential Operator utility

The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility (ccoctl) is similar to creating the cloud provider resources during installation.

Note

On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.

Prerequisites

  • You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster.
  • You have extracted and configured the ccoctl binary from the release image.

Procedure

  1. Use the ccoctl tool to process all CredentialsRequest objects by running the command for your cloud provider. The following commands process CredentialsRequest objects:

    Example 2.1. Amazon Web Services (AWS)

    $ ccoctl aws create-all \ 1
      --name=<name> \ 2
      --region=<aws_region> \ 3
      --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
      --output-dir=<path_to_ccoctl_output_dir> \ 5
      --create-private-s3-bucket 6

    1 To create the AWS resources individually, use the "Creating AWS resources individually" procedure in the "Installing a cluster on AWS with customizations" content. This option might be useful if you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization.
    2 Specify the name used to tag any cloud resources that are created for tracking.
    3 Specify the AWS region in which cloud resources will be created.
    4 Specify the directory containing the files for the component CredentialsRequest objects.
    5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.

    Example 2.2. Google Cloud Platform (GCP)

    $ ccoctl gcp create-all \
      --name=<name> \ 1
      --region=<gcp_region> \ 2
      --project=<gcp_project_id> \ 3
      --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
      --output-dir=<path_to_ccoctl_output_dir> 5

    1 Specify the user-defined name for all created GCP resources used for tracking.
    2 Specify the GCP region in which cloud resources will be created.
    3 Specify the GCP project ID in which cloud resources will be created.
    4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts.
    5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.

    Example 2.3. IBM Cloud

    $ ccoctl ibmcloud create-service-id \
      --credentials-requests-dir=<path_to_credential_requests_directory> \ 1
      --name=<cluster_name> \ 2
      --output-dir=<installation_directory> \ 3
      --resource-group-name=<resource_group_name> 4

    1 Specify the directory containing the files for the component CredentialsRequest objects.
    2 Specify the name of the OpenShift Container Platform cluster.
    3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    4 Optional: Specify the name of the resource group used for scoping the access policies.

    Example 2.4. Microsoft Azure

    $ ccoctl azure create-managed-identities \
      --name <azure_infra_name> \ 1
      --output-dir ./output_dir \
      --region <azure_region> \ 2
      --subscription-id <azure_subscription_id> \ 3
      --credentials-requests-dir <path_to_directory_for_credentials_requests> \
      --issuer-url "${OIDC_ISSUER_URL}" \ 4
      --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 5
      --installation-resource-group-name "${AZURE_INSTALL_RG}" 6

    1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value.
    2 Specify the region of the existing cluster.
    3 Specify the subscription ID of the existing cluster.
    4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by running the following command:

      $ oc get authentication cluster \
        -o jsonpath \
        --template='{ .spec.serviceAccountIssuer }'

    5 Specify the name of the resource group that contains the DNS zone.
    6 Specify the Azure resource group name. You can obtain this value by running the following command:

      $ oc get infrastructure cluster \
        -o jsonpath \
        --template '{ .status.platformStatus.azure.resourceGroupName }'

    Example 2.5. Nutanix

    $ ccoctl nutanix create-shared-secrets \
      --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1
      --output-dir=<ccoctl_output_dir> \ 2
      --credentials-source-filepath=<path_to_credentials_file> 3

    1 Specify the path to the directory that contains the files for the component CredentialsRequest objects.
    2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials.

    For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.

  2. Apply the secrets to your cluster by running the following command:

    $ ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}

Verification

You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts.
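For example, on AWS you might list the IAM roles that the ccoctl utility created by filtering on the tracking name that you supplied; this sketch assumes the AWS CLI is configured for the same account:

$ aws iam list-roles --query "Roles[?contains(RoleName, '<name>')].RoleName" --output text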

Next steps

  • Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.

2.2.5. Manually updating cloud provider resources

Before upgrading a cluster with manually maintained credentials, you must create secrets for any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.

Prerequisites

  • You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster.

Procedure

  1. Create YAML files with secrets for any CredentialsRequest custom resources that the new release image adds. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

    Example 2.6. Sample AWS YAML files

    Sample AWS CredentialsRequest object with secrets

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - effect: Allow
          action:
          - s3:CreateBucket
          - s3:DeleteBucket
          resource: "*"
          ...
      secretRef:
        name: <component_secret>
        namespace: <component_namespace>
      ...

    Sample AWS Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: <component_secret>
      namespace: <component_namespace>
    data:
      aws_access_key_id: <base64_encoded_aws_access_key_id>
      aws_secret_access_key: <base64_encoded_aws_secret_access_key>

    Example 2.7. Sample Azure YAML files

    Note

    Global Azure and Azure Stack Hub use the same CredentialsRequest object and secret formats.

    Sample Azure CredentialsRequest object with secrets

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor
          ...
      secretRef:
        name: <component_secret>
        namespace: <component_namespace>
      ...

    Sample Azure Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: <component_secret>
      namespace: <component_namespace>
    data:
      azure_subscription_id: <base64_encoded_azure_subscription_id>
      azure_client_id: <base64_encoded_azure_client_id>
      azure_client_secret: <base64_encoded_azure_client_secret>
      azure_tenant_id: <base64_encoded_azure_tenant_id>
      azure_resource_prefix: <base64_encoded_azure_resource_prefix>
      azure_resourcegroup: <base64_encoded_azure_resourcegroup>
      azure_region: <base64_encoded_azure_region>

    Example 2.8. Sample GCP YAML files

    Sample GCP CredentialsRequest object with secrets

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: GCPProviderSpec
        predefinedRoles:
        - roles/iam.securityReviewer
        - roles/iam.roleViewer
        skipServiceCheck: true
        ...
      secretRef:
        name: <component_secret>
        namespace: <component_namespace>
      ...

    Sample GCP Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: <component_secret>
      namespace: <component_namespace>
    data:
      service_account.json: <base64_encoded_gcp_service_account_file>

  2. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required.

Next steps

  • Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.

2.2.6. Indicating that the cluster is ready to upgrade

The Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials is False by default.

Prerequisites

  • For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility (ccoctl).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to oc on the cluster as a user with the cluster-admin role.
  2. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command:

    $ oc edit cloudcredential cluster

    Text to add

    ...
      metadata:
        annotations:
          cloudcredential.openshift.io/upgradeable-to: <version_number>
    ...

    Where <version_number> is the version that you are upgrading to, in the format x.y.z. For example, use 4.12.2 for OpenShift Container Platform 4.12.2.

    It may take several minutes after adding the annotation for the upgradeable status to change.

Verification

  1. In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
  2. To view the CCO status details, click cloud-credential in the Cluster Operators list.

    • If the Upgradeable status in the Conditions section is False, verify that the upgradeable-to annotation is free of typographical errors.
  3. When the Upgradeable status in the Conditions section is True, begin the OpenShift Container Platform upgrade.
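If you prefer the CLI, a roughly equivalent check reads the Upgradeable condition directly from the cloud-credential cluster Operator:

$ oc get clusteroperator cloud-credential \
    -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'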

2.3. Preflight validation for Kernel Module Management (KMM) Modules

Before performing an upgrade on a cluster with applied KMM modules, you must verify that kernel modules installed using KMM can be installed on the nodes after the cluster upgrade and possible kernel upgrade. Preflight attempts to validate every Module loaded in the cluster, in parallel. Preflight does not wait for validation of one Module to complete before starting validation of another Module.

2.3.1. Validation kickoff

Preflight validation is triggered by creating a PreflightValidationOCP resource in the cluster. This resource contains the following fields:

dtkImage

The DTK container image released for the specific OpenShift Container Platform version of the cluster. If this value is not set, the DTK_AUTO feature cannot be used.

You can obtain the image by running one of the following commands in the cluster:

# For x86_64 image:
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.19.0-x86_64 --image-for=driver-toolkit
# For ARM64 image:
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.19.0-aarch64 --image-for=driver-toolkit
kernelVersion

Required field that provides the version of the kernel that the cluster is upgraded to.

You can obtain the version by running the following command in the cluster:

$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.19.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json
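The command prints the JSON content of /etc/driver-toolkit-release.json. Assuming that file contains a KERNEL_VERSION key, you can extract only the kernel version with jq, for example:

$ podman run --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.19.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json | jq -r '.KERNEL_VERSION'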
pushBuiltImage
If true, then the images created during the Build and Sign validation are pushed to their repositories. This field is false by default.

2.3.2. Validation lifecycle

Preflight validation attempts to validate every module loaded in the cluster. Preflight stops running validation on a Module resource after the validation is successful. If module validation fails, you can change the module definitions and Preflight tries to validate the module again in the next loop.

If you want to run Preflight validation for an additional kernel, then you should create another PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is recommended to delete the PreflightValidationOCP resource.

2.3.3. Validation status

A PreflightValidationOCP resource reports the status and progress of each module in the cluster that it attempts or has attempted to validate in its .status.modules list. Elements of that list contain the following fields:

name
The name of the Module resource.
namespace
The namespace of the Module resource.
statusReason
Verbal explanation regarding the status.
verificationStage

Describes the validation stage being executed:

  • Image: Image existence verification
  • Done: Verification is done
verificationStatus

The status of the Module verification:

  • Success: Verified
  • Failure: Verification failed
  • InProgress: Verification is in progress

2.3.4. Image validation stage

Image validation is always the first stage of the preflight validation to be executed. If image validation is successful, no other validations are run on that specific module. The Operator uses the container runtime to check the image existence and accessibility for the updated kernel in the module.

If the image validation fails and there is a build/sign section in the module that is relevant to the upgraded kernel, the controller tries to build or sign the image. If the pushBuiltImage flag is defined in the PreflightValidationOCP resource, the controller will also try to push the resulting image into its repository. The resulting image name is taken from the definition of the containerImage field of the Module CR.

Note

If a build section exists, the input image for the sign section is the output image of the build section. Therefore, for the input image to be available to the sign section, the pushBuiltImage flag must be set in the PreflightValidationOCP CR.

2.3.5. Example PreflightValidationOCP resource

The following example shows a PreflightValidationOCP resource in the YAML format.

The example verifies all of the currently present modules against the upcoming 5.14.0-570.19.1.el9_6.x86_64 kernel. Because .spec.pushBuiltImage is set to true, KMM pushes the resulting images of Build/Sign into the defined repositories.

apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
  name: preflight
spec:
  kernelVersion: 5.14.0-570.19.1.el9_6.x86_64
  dtkImage: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe0322730440f1cbe6fffaaa8cac131b56574bec8abe3ec5b462e17557fecb32
  pushBuiltImage: true
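A minimal way to apply this resource and then follow the per-module validation progress might look like the following; the manifest file name is an example, and you can confirm the exact resource name to query with oc api-resources:

$ oc apply -f preflight.yaml
$ oc api-resources --api-group=kmm.sigs.x-k8s.io
$ oc get <preflight_resource_name> preflight -o yaml

Review the .status.modules list in the output for the verificationStage and verificationStatus of each module.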

2.4. Preparing to update from OpenShift Container Platform 4.18 to a newer version

Before you update from OpenShift Container Platform 4.18 to a newer version, learn about some of the specific concerns around Red Hat Enterprise Linux (RHEL) compute machines.

2.4.1. Migrating workloads off of package-based RHEL worker nodes

With the introduction of OpenShift Container Platform 4.19, package-based RHEL worker nodes are no longer supported. If you try to update your cluster while those nodes are up and running, the update will fail.

You can reschedule pods running on RHEL compute nodes to run on your RHCOS nodes instead by using node selectors.

For example, the following Node object has a label for its operating system information, in this case RHCOS:

Sample Node object with RHCOS label

kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-131-14.ec2.internal
  selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal
  uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74
  resourceVersion: '478704'
  creationTimestamp: '2019-06-10T14:46:08Z'
  labels:
    kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/zone: us-east-1a
    node.openshift.io/os_version: '4.19'
    node-role.kubernetes.io/worker: ''
    failure-domain.beta.kubernetes.io/region: us-east-1
    node.openshift.io/os_id: rhcos 1
    beta.kubernetes.io/instance-type: m4.large
    kubernetes.io/hostname: ip-10-0-131-14
    beta.kubernetes.io/arch: amd64
#...

1 The label identifying the operating system that runs on the node, to match the pod node selector.

Any pod that you want to schedule on the new RHCOS nodes must contain a matching label in its nodeSelector field. The following procedure describes how to add the label.

Procedure

  1. Deschedule the RHEL node currently running your existing pods by entering the following command:

    $ oc adm cordon <rhel-node>
  2. Add an rhcos node selector to a pod:

    • To add the node selector to existing and future pods, add the node selector to the controller object for the pods by entering the following command:

      Example command adding the rhcos node selector to a controller object

      $ oc patch dc <my-app> -p '{"spec":{"template":{"spec":{"nodeSelector":{"node.openshift.io/os_id":"rhcos"}}}}}'

      Any existing pods under your Deployment controlling object will be re-created on your RHCOS nodes.

    • To add the node selector to a specific, new pod, add the selector to the Pod object directly:

      Example Pod object with rhcos label

      apiVersion: v1
      kind: Pod
      metadata:
        name: <my-app>
      #...
      spec:
        nodeSelector:
          node.openshift.io/os_id: rhcos
      #...

      The new pod will be created on RHCOS nodes, assuming the pod also has a controlling object.

2.4.2. Identifying and removing RHEL worker nodes

With the introduction of OpenShift Container Platform 4.19, package-based RHEL worker nodes are no longer supported. The following procedure describes how to identify RHEL nodes for cluster removal on bare-metal installations. You must complete the following steps to successfully update your cluster.

Procedure

  1. Identify nodes in your cluster that are running RHEL by entering the following command:

    $ oc get nodes -l node.openshift.io/os_id=rhel

    Example output

    NAME                        STATUS    ROLES     AGE       VERSION
    rhel-node1.example.com      Ready     worker    7h        v1.32.3
    rhel-node2.example.com      Ready     worker    7h        v1.32.3
    rhel-node3.example.com      Ready     worker    7h        v1.32.3

  2. Continue with the node removal process. RHEL nodes are not managed by the Machine API and have no compute machine sets associated with them. You must unschedule and drain the node before you manually delete it from the cluster.

    For more information on this process, see How to remove a worker node from Red Hat OpenShift Container Platform 4 UPI.
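    As a rough outline, removing a RHEL worker node typically involves cordoning, draining, and then deleting the node; review the linked procedure for the complete steps and any flags that are appropriate for your workloads:

    $ oc adm cordon <rhel_node>
    $ oc adm drain <rhel_node> --ignore-daemonsets --delete-emptydir-data --force
    $ oc delete node <rhel_node>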

2.4.3. Provisioning new RHCOS worker nodes

If you need additional compute nodes for your workloads, you can provision new ones either before or after you update your cluster. For more information, see the following machine management documentation:

For installer-provisioned infrastructure installations, automatic scaling adds RHCOS nodes by default. For user-provisioned infrastructure installations on bare metal platforms, you can manually add RHCOS compute nodes to your cluster.
