Chapter 2. Preparing to update a cluster
2.1. Preparing to update to OpenShift Container Platform 4.14
Cluster administrators must complete certain administrative tasks to successfully initialize an update. Consider also following the optional guidelines in this section to help ensure a successful update.
For a cluster that runs on VMware vSphere, you can use vSphere Container Storage Interface (CSI) automatic migration to migrate in-tree storage plugins to their counterpart CSI drivers. For more information, see vSphere CSI automatic migration.
2.1.1. RHEL 9.2 micro-architecture requirement change
OpenShift Container Platform is now based on the RHEL 9.2 host operating system. The micro-architecture requirements are now increased to x86_64-v2, Power9, and Z14. See the RHEL micro-architecture requirements documentation. You can verify compatibility before updating by following the procedures outlined in this KCS article.
If your hardware does not meet these micro-architecture requirements, the update process fails. Ensure that you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux (RHEL) - additional architectures.
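On x86_64 hosts, support for the x86-64-v2 micro-architecture level can be approximated from the CPU flags reported by the kernel. The following is a hedged sketch, not the authoritative procedure: the flag list covers key x86-64-v2 additions but is not exhaustive, so use the KCS article referenced above for the definitive check.

```shell
# Sketch: approximate x86-64-v2 support check against a cpuinfo dump.
# The flag list covers key x86-64-v2 additions (not exhaustive); see the
# KCS article for the authoritative verification procedure.
check_x86_64_v2() {
  cpuinfo="${1:-/proc/cpuinfo}"   # pass a saved file to check another host
  for f in cx16 lahf_lm popcnt sse4_1 sse4_2 ssse3; do
    grep -qw "$f" "$cpuinfo" || { echo "missing: $f"; return 1; }
  done
  echo "CPU appears to support x86-64-v2"
}
```

For example, run `check_x86_64_v2 /proc/cpuinfo` on each node, or run it against a cpuinfo file saved from another host.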
2.1.2. Kubernetes API deprecations and removals
OpenShift Container Platform 4.14 uses Kubernetes 1.27, which removed several deprecated APIs.
A cluster administrator must provide a manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.13 to 4.14. This is to help prevent issues after upgrading to OpenShift Container Platform 4.14, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this evaluation and migration is complete, the administrator can provide the acknowledgment.
Before you can update your OpenShift Container Platform 4.13 cluster to 4.14, you must provide the administrator acknowledgment.
2.1.2.1. Removed Kubernetes APIs
OpenShift Container Platform 4.14 uses Kubernetes 1.27, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation.
| Resource | Removed API | Migrate to |
|---|---|---|
| `CSIStorageCapacity` | `storage.k8s.io/v1beta1` | `storage.k8s.io/v1` |
2.1.2.2. Evaluating your cluster for removed APIs
There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs.
2.1.2.2.1. Reviewing alerts to identify uses of removed APIs
Two alerts fire when an API is in use that will be removed in the next release:
- `APIRemovedInNextReleaseInUse` - for APIs that will be removed in the next OpenShift Container Platform release.
- `APIRemovedInNextEUSReleaseInUse` - for APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
Procedure
- If either of the alerts are firing in your cluster, review the alerts and take action to clear the alerts by migrating manifests and API clients to use the new API version.
Verification
- Use the `APIRequestCount` API to get more information about which APIs are in use and which workloads are using removed APIs, because the alerts do not provide this information. Additionally, some APIs might not trigger these alerts but are still captured by `APIRequestCount`. The alerts are tuned to be less sensitive to avoid alerting fatigue in production systems.
2.1.2.2.2. Using APIRequestCount to identify uses of removed APIs
You can use the `APIRequestCount` API to track API requests and review whether any of them are using one of the removed APIs.
Prerequisites
- You must have access to the cluster as a user with the `cluster-admin` role.
Procedure
- Run the following command and examine the `REMOVEDINRELEASE` column of the output to identify the removed APIs that are currently in use:

```
$ oc get apirequestcounts
```

Example output:

```
NAME                                                      REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
...
csistoragecapacities.v1.storage.k8s.io                                       14                      380
csistoragecapacities.v1beta1.storage.k8s.io               1.27               0                       16
custompolicydefinitions.v1beta1.capabilities.3scale.net                      8                       158
customresourcedefinitions.v1.apiextensions.k8s.io                            1407                    30148
...
```

Important: You can safely ignore the following entries that appear in the results:
- The `system:serviceaccount:kube-system:generic-garbage-collector` and the `system:serviceaccount:kube-system:namespace-controller` users might appear in the results because these services invoke all registered APIs when searching for resources to remove.
- The `system:kube-controller-manager` and `system:cluster-policy-controller` users might appear in the results because they walk through all resources while enforcing various policies.
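As a sketch, you can save the `oc get apirequestcounts` output to a file and use `awk` to keep only the rows that carry a value in the `REMOVEDINRELEASE` column. The file name used in the usage example is an assumption for illustration.

```shell
# Keep only rows whose REMOVEDINRELEASE column (second field) holds a
# Kubernetes version. Empty columns shift fields when whitespace-split,
# but request counts never match the version pattern, so the guard holds.
list_removed_apis() {
  awk 'NR > 1 && $2 ~ /^[0-9]+\.[0-9]+$/ { print $1, "removed in", $2 }' "$1"
}
```

For example: `oc get apirequestcounts > apirequestcounts.txt && list_removed_apis apirequestcounts.txt`.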
You can also use `-o jsonpath` to filter the results:

```
$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
```

Example output:

```
1.27    csistoragecapacities.v1beta1.storage.k8s.io
1.29    flowschemas.v1beta2.flowcontrol.apiserver.k8s.io
1.29    prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io
```
2.1.2.2.3. Using APIRequestCount to identify which workloads are using the removed APIs
You can examine the `APIRequestCount` resource for a given API version to help identify which workloads are using the API.
Prerequisites
- You must have access to the cluster as a user with the `cluster-admin` role.
Procedure
- Run the following command and examine the `username` and `userAgent` fields to help identify the workloads that are using the API:

```
$ oc get apirequestcounts <resource>.<version>.<group> -o yaml
```

For example:

```
$ oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o yaml
```

- You can also use `-o jsonpath` to extract the `username` and `userAgent` values from an `APIRequestCount` resource:

```
$ oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io \
  -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}' \
  | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT
```

Example output:

```
VERBS        USERNAME                         USERAGENT
list watch   system:kube-controller-manager   cluster-policy-controller/v0.0.0
list watch   system:kube-controller-manager   kube-controller-manager/v1.26.5+0abcdef
list watch   system:kube-scheduler            kube-scheduler/v1.26.5+0abcdef
```
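To focus on application workloads, you might filter out the well-known control-plane clients from the extracted `verbs,username,userAgent` rows. A minimal sketch, assuming the jsonpath output was saved to a file (the file name and the sample workload row in the test are assumptions for illustration):

```shell
# Drop rows for the well-known control-plane clients; what remains is more
# likely to be your own workloads or external tools.
filter_workload_clients() {
  grep -Ev 'system:(kube-controller-manager|kube-scheduler|cluster-policy-controller)' "$1"
}
```

For example: `filter_workload_clients byuser.csv`.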
2.1.2.3. Migrating instances of removed APIs
For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation.
2.1.2.4. Providing the administrator acknowledgment
After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.13 to 4.14.
Be aware that all responsibility falls on the administrator to ensure that all uses of removed APIs have been resolved and migrated as necessary before providing this administrator acknowledgment. OpenShift Container Platform can assist with the evaluation, but cannot identify all possible uses of removed APIs, especially idle workloads or external tools.
Prerequisites
- You must have access to the cluster as a user with the `cluster-admin` role.
Procedure
Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.14:
```
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-1.27-api-removals-in-4.14":"true"}}' --type=merge
```
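To double-check that the acknowledgment landed, a minimal sketch, assuming you saved the config map locally first (for example with `oc -n openshift-config get cm admin-acks -o yaml > admin-acks.yaml`; the file name is an assumption):

```shell
# Confirm the acknowledgment key is present and set to "true" in a saved
# copy of the admin-acks config map.
ack_present() {
  grep -q 'ack-4.13-kube-1.27-api-removals-in-4.14: "true"' "$1" \
    && echo "acknowledgment present"
}
```

For example: `ack_present admin-acks.yaml`.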
2.1.3. Assessing the risk of conditional updates
A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them.
The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update.
When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat.
However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk:
- Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment.
- Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support.
2.1.4. etcd backups before cluster updates
etcd backups record the state of your cluster and all of its resource objects. You can use backups to attempt restoring the state of a cluster in disaster scenarios where you cannot recover a cluster in its currently dysfunctional state.
In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced catastrophic conditions that cannot be fixed without reverting to the previous cluster version. etcd restorations might be destructive and destabilizing to a running cluster; use them only as a last resort.
Due to their high consequences, etcd restorations are not intended to be used as a rollback solution. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
There are several factors that affect the viability of an etcd restoration. For more information, see "Backing up etcd data" and "Restoring to a previous cluster state".
2.1.5. Best practices for cluster updates
OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request.
This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update.
2.1.5.1. Choose versions recommended by the OpenShift Update Service
The OpenShift Update Service (OSUS) provides update recommendations based on cluster characteristics such as the cluster’s subscribed channel. The Cluster Version Operator saves these recommendations as either recommended or conditional updates. While it is possible to attempt an update to a version that is not recommended by OSUS, following a recommended update path protects users from encountering known issues or unintended consequences on the cluster.
Choose only update targets that are recommended by OSUS to ensure a successful update.
2.1.5.2. Address all critical alerts on the cluster
Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster.
In the Administrator perspective of the web console, navigate to Observe → Alerting to view current alerts.
2.1.5.2.1. Ensure that duplicated encoding headers are removed
Before updating, you will receive a `DuplicateTransferEncodingHeadersDetected` alert if any routes in the cluster serve applications that respond with multiple `Transfer-Encoding` headers. OpenShift Container Platform 4.14 rejects responses that contain duplicated `Transfer-Encoding` headers, which causes the affected applications to fail after the update.

To mitigate this issue, update any problematic applications to no longer send multiple `Transfer-Encoding` headers.
For more information, see this Red Hat Knowledgebase article.
2.1.5.3. Ensure that the cluster is in an Upgradeable state
When one or more Operators have not reported their `Upgradeable` condition as `True` for an extended period, the `ClusterNotUpgradeable` warning alert fires in the cluster. In most cases this alert does not block patch (z-stream) updates, but you cannot perform a minor version update until you resolve the alert and all Operators report the `Upgradeable` condition as `True`.

For more information about the `Upgradeable` condition, see the additional resources.
2.1.5.4. Ensure that enough spare nodes are available
A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster’s ability to perform an update with minimal disruption to cluster workloads.
Depending on the configured value of the cluster's `maxUnavailable` spec, the cluster might not be able to drain nodes and apply node updates if there is not enough spare capacity.

Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates.

The default setting for `maxUnavailable` is `1` for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value. Do not change this value to `3` for the control plane pool.
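For context, `maxUnavailable` is set per machine config pool. A hedged fragment showing where the field lives (the pool name and value are illustrative; the default of `1` applies when the field is unset):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker          # illustrative pool name
spec:
  maxUnavailable: 1     # number of nodes in this pool that can update in parallel
```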
2.1.5.5. Ensure that the cluster's PodDisruptionBudget is properly configured
You can use the `PodDisruptionBudget` resource to specify the minimum number or percentage of pod replicas that must be available at any given time. This configuration protects workloads from disruptions during maintenance tasks such as cluster updates.

However, it is possible to configure the `PodDisruptionBudget` for a given topology in a way that prevents nodes from being drained and updated when a cluster update occurs.

When planning a cluster update, check the configuration of the `PodDisruptionBudget` resource for the following factors:

- For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the `PodDisruptionBudget`.
- For workloads that are not highly available, make sure they are either not protected by a `PodDisruptionBudget` or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination.
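For example, a `PodDisruptionBudget` that keeps a multi-replica deployment drainable might look like the following (the names and labels are hypothetical):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # hypothetical name
  namespace: my-app       # hypothetical namespace
spec:
  maxUnavailable: 1       # always leaves room to evict one replica during a node drain
  selector:
    matchLabels:
      app: my-app         # hypothetical label
```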
2.2. Preparing to update a cluster with manually maintained credentials
The Cloud Credential Operator (CCO) `Upgradable` status for a cluster with manually maintained credentials is `False` by default.

- For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the `CloudCredential` resource to indicate that the permissions are updated as needed for the next version. This annotation changes the `Upgradable` status to `True`.
- For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked.
Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to.
2.2.1. Update requirements for clusters with manually maintained credentials
Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release.
If the cloud credential management for your cluster was configured using the CCO utility (`ccoctl`), use the `ccoctl` utility to update the resources. Clusters that were configured to use manual mode without the `ccoctl` utility require manual updates for the resources.

After updating the cloud provider resources, you must update the `upgradeable-to` annotation for the cluster to indicate that it is ready to update.

The process to update the cloud provider resources and the `upgradeable-to` annotation can only be completed by using command-line tools.
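For orientation, the `upgradeable-to` annotation is set on the `CloudCredential` resource named `cluster`. A sketch of how the metadata looks after the annotation is applied (the version value is illustrative):

```yaml
apiVersion: operator.openshift.io/v1
kind: CloudCredential
metadata:
  name: cluster
  annotations:
    cloudcredential.openshift.io/upgradeable-to: "4.14.0"  # illustrative target version
```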
2.2.1.1. Cloud credential configuration options and update requirements by platform type
Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements.
For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration.
Figure 2.1. Credentials update requirements by platform type
- Red Hat OpenStack Platform (RHOSP) and VMware vSphere
These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the `upgradeable-to` annotation.

Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process.
- IBM Cloud and Nutanix
Clusters installed on these platforms are configured using the `ccoctl` utility.

Administrators of clusters on these platforms must take the following actions:

- Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
- Configure the `ccoctl` utility for the new release and use it to update the cloud provider resources.
- Indicate that the cluster is ready to update with the `upgradeable-to` annotation.
- Microsoft Azure Stack Hub
These clusters use manual mode with long-term credentials and do not use the `ccoctl` utility.

Administrators of clusters on these platforms must take the following actions:

- Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
- Manually update the cloud provider resources for the new release.
- Indicate that the cluster is ready to update with the `upgradeable-to` annotation.
- Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud
Clusters installed on these platforms support multiple CCO modes.
The required update process depends on the mode that the cluster is configured to use. If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information.
2.2.1.2. Determining the Cloud Credential Operator mode by using the web console
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console.
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud clusters support multiple CCO modes.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator permissions.
Procedure
- Log in to the OpenShift Container Platform web console as a user with the `cluster-admin` role.
- Navigate to Administration → Cluster Settings.
- Under Configuration resource, select CloudCredential.
- On the CloudCredential details page, select the YAML tab.
- In the YAML block, check the value of `spec.credentialsMode`. The following values are possible, though not all are supported on all platforms:
  - `''`: The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
  - `Mint`: The CCO is operating in mint mode.
  - `Passthrough`: The CCO is operating in passthrough mode.
  - `Manual`: The CCO is operating in manual mode.
Important: To determine the specific configuration of an AWS, Google Cloud, or global Microsoft Azure cluster that has a `spec.credentialsMode` of `''`, `Mint`, or `Manual`, you must investigate further.

AWS and Google Cloud clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating.

An AWS, Google Cloud, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, Google Cloud Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster `Authentication` object.
- AWS or Google Cloud clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads → Secrets and look for the root secret for your cloud provider.

  Note: Ensure that the Project dropdown is set to All Projects.

| Platform | Secret name |
|---|---|
| AWS | `aws-creds` |
| Google Cloud | `gcp-credentials` |

  - If you see one of these values, your cluster is using mint or passthrough mode with the root secret present.
  - If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed.
- AWS, Google Cloud, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster `Authentication` object YAML values.
  - Navigate to Administration → Cluster Settings.
  - On the Cluster Settings page, select the Configuration tab.
  - Under Configuration resource, select Authentication.
  - On the Authentication details page, select the YAML tab.
  - In the YAML block, check the value of the `.spec.serviceAccountIssuer` parameter.
    - A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the `ccoctl` utility to create and manage cloud credentials from outside of the cluster.
    - An empty value (`''`) indicates that the cluster is using the CCO in manual mode but was not configured using the `ccoctl` utility.
Next steps
- If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process.
- If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process.
- If your cluster was configured using the CCO utility (`ccoctl`), you must take the following actions:
  - Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
  - Configure the `ccoctl` utility for the new release and use it to update the cloud provider resources.
  - Update the `upgradeable-to` annotation to indicate that the cluster is ready to update.
- If your cluster is using the CCO in manual mode but was not configured using the `ccoctl` utility, you must take the following actions:
  - Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
  - Manually update the cloud provider resources for the new release.
  - Update the `upgradeable-to` annotation to indicate that the cluster is ready to update.
2.2.1.3. Determining the Cloud Credential Operator mode by using the CLI
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI.
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud clusters support multiple CCO modes.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator permissions.
- You have installed the OpenShift CLI (`oc`).
Procedure
- Log in to `oc` on the cluster as a user with the `cluster-admin` role.
- To determine the mode that the CCO is configured to use, enter the following command:

```
$ oc get cloudcredentials cluster \
  -o=jsonpath={.spec.credentialsMode}
```

The following output values are possible, though not all are supported on all platforms:
  - `''`: The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
  - `Mint`: The CCO is operating in mint mode.
  - `Passthrough`: The CCO is operating in passthrough mode.
  - `Manual`: The CCO is operating in manual mode.
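As a sketch, a script that consumes the reported mode could branch like this (the value would normally be captured from the `oc get cloudcredentials` query; the function name is an assumption for illustration):

```shell
# Interpret a spec.credentialsMode value; the empty string is the default.
cco_mode() {
  case "$1" in
    "")          echo "default mode (mint or passthrough, per install-time credentials)" ;;
    Mint)        echo "mint mode" ;;
    Passthrough) echo "passthrough mode" ;;
    Manual)      echo "manual mode" ;;
    *)           echo "unexpected mode: $1" ;;
  esac
}

cco_mode "Mint"   # prints: mint mode
```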
Important: To determine the specific configuration of an AWS, Google Cloud, or global Microsoft Azure cluster that has a `spec.credentialsMode` of `''`, `Mint`, or `Manual`, you must investigate further.

AWS and Google Cloud clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating.

An AWS, Google Cloud, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, Google Cloud Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster `Authentication` object.
- AWS or Google Cloud clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command:

```
$ oc get secret <secret_name> \
  -n=kube-system
```

where `<secret_name>` is `aws-creds` for AWS or `gcp-credentials` for Google Cloud.

If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster.
- AWS, Google Cloud, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command:

```
$ oc get authentication cluster \
  -o jsonpath \
  --template='{ .spec.serviceAccountIssuer }'
```

This command displays the value of the `.spec.serviceAccountIssuer` parameter in the cluster `Authentication` object.

  - An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the `ccoctl` utility to create and manage cloud credentials from outside of the cluster.
  - An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the `ccoctl` utility.
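The decision logic can be sketched as a small function (the function name and the issuer URL in the usage line are made-up examples, not real endpoints):

```shell
# Classify a cluster from its .spec.serviceAccountIssuer value: a non-empty
# issuer URL implies ccoctl-managed short-term credentials; an empty value
# implies manual mode without ccoctl.
classify_issuer() {
  if [ -n "$1" ]; then
    echo "manual mode with short-term credentials (ccoctl-configured)"
  else
    echo "manual mode without ccoctl"
  fi
}

classify_issuer "https://oidc.example.com/cluster"   # made-up issuer URL
```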
Next steps
- If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process.
- If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process.
- If your cluster was configured using the CCO utility (`ccoctl`), you must take the following actions:
  - Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
  - Configure the `ccoctl` utility for the new release and use it to update the cloud provider resources.
  - Update the `upgradeable-to` annotation to indicate that the cluster is ready to update.
- If your cluster is using the CCO in manual mode but was not configured using the `ccoctl` utility, you must take the following actions:
  - Extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
  - Manually update the cloud provider resources for the new release.
  - Update the `upgradeable-to` annotation to indicate that the cluster is ready to update.
2.2.2. Extracting and preparing credentials request resources
Before updating a cluster that uses the Cloud Credential Operator (CCO) in manual mode, you must extract and prepare the `CredentialsRequest` custom resources (CRs) for the new release.
Prerequisites
- Install the version of the OpenShift CLI (`oc`) that matches your updated version.
- Log in to the cluster as a user with `cluster-admin` privileges.
Procedure
- Obtain the pull spec for the update that you want to apply by running the following command:

```
$ oc adm upgrade
```

The output of this command includes pull specs for the available updates, similar to the following partial example output:

```
...
Recommended updates:

  VERSION     IMAGE
  4.14.0      quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
...
```

- Set a `$RELEASE_IMAGE` variable with the release image that you want to use by running the following command:

```
$ RELEASE_IMAGE=<update_pull_spec>
```

where `<update_pull_spec>` is the pull spec for the release image that you want to use, for example, `quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032`.

- Extract the list of `CredentialsRequest` custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

```
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --to=<path_to_directory_for_credentials_requests>
```

This command creates a YAML file for each `CredentialsRequest` object.

- For each `CredentialsRequest` CR in the release image, ensure that a namespace that matches the text in the `spec.secretRef.namespace` field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored.

Sample AWS `CredentialsRequest` object:

```yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: cloud-credential-operator-iam-ro
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser
      - iam:GetUserPolicy
      - iam:ListAccessKeys
      resource: "*"
  secretRef:
    name: cloud-credential-operator-iam-ro-creds
    namespace: openshift-cloud-credential-operator # This namespace must exist to hold the generated secret.
```

The `CredentialsRequest` CRs for other platforms have a similar format with different platform-specific values.

- For any `CredentialsRequest` CR for which the cluster does not already have a namespace with the name specified in `spec.secretRef.namespace`, create the namespace by running the following command:

```
$ oc create namespace <component_namespace>
```
Next steps
- If the cloud credential management for your cluster was configured using the CCO utility (`ccoctl`), configure the `ccoctl` utility for a cluster update and use it to update your cloud provider resources.
- If your cluster was not configured with the `ccoctl` utility, manually update your cloud provider resources.
2.2.3. Configuring the Cloud Credential Operator utility for a cluster update
To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility (`ccoctl`) binary.

Note: The `ccoctl` utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (`oc`).
- Your cluster was configured using the `ccoctl` utility to create and manage cloud credentials from outside of the cluster.
- You have extracted the `CredentialsRequest` custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the `spec.secretRef.namespace` field exists in the cluster.
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})

Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)

Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret

Change the permissions to make ccoctl executable by running the following command:

$ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

$ ./ccoctl.rhel9

Example output

OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  alibabacloud Manage credentials objects for alibaba cloud
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
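Because the architecture of the ccoctl binary must match the environment where you run it, you can additionally compare the two. This is a sketch using the standard file and uname utilities, assuming the binary sits in the current directory:

```shell
# Compare the binary's architecture with the host's. "file" prints a line
# such as "ELF 64-bit LSB executable, x86-64 ..." for an x86_64 binary.
file ./ccoctl 2>/dev/null || echo "ccoctl not found in current directory"
uname -m   # host architecture, for comparison (e.g. x86_64)
```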
2.2.4. Updating cloud provider resources with the Cloud Credential Operator utility
The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility (ccoctl) is similar to creating the cloud provider resources during installation.

Note: On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.
Prerequisites
- You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster.
- You have extracted and configured the ccoctl binary from the release image.
Procedure
Use the ccoctl tool to process all CredentialsRequest objects by running the command for your cloud provider. The following commands process CredentialsRequest objects:

Example 2.1. Amazon Web Services (AWS)
$ ccoctl aws create-all \ 1
  --name=<name> \ 2
  --region=<aws_region> \ 3
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
  --output-dir=<path_to_ccoctl_output_dir> \ 5
  --create-private-s3-bucket 6

1 To create the AWS resources individually, use the "Creating AWS resources individually" procedure in the "Installing a cluster on AWS with customizations" content. This option might be useful if you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization.
2 Specify the name used to tag any cloud resources that are created for tracking.
3 Specify the AWS region in which cloud resources will be created.
4 Specify the directory containing the files for the component CredentialsRequest objects.
5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.
Example 2.2. Google Cloud
$ ccoctl gcp create-all \
  --name=<name> \ 1
  --region=<gcp_region> \ 2
  --project=<gcp_project_id> \ 3
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4
  --output-dir=<path_to_ccoctl_output_dir> 5

1 Specify the user-defined name for all created Google Cloud resources used for tracking.
2 Specify the Google Cloud region in which cloud resources will be created.
3 Specify the Google Cloud project ID in which cloud resources will be created.
4 Specify the directory containing the files of CredentialsRequest manifests to create Google Cloud service accounts.
5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
Example 2.3. IBM Cloud
$ ccoctl ibmcloud create-service-id \
  --credentials-requests-dir=<path_to_credential_requests_directory> \ 1
  --name=<cluster_name> \ 2
  --output-dir=<installation_directory> \ 3
  --resource-group-name=<resource_group_name> 4

1 Specify the directory containing the files for the component CredentialsRequest objects.
2 Specify the name of the OpenShift Container Platform cluster.
3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
4 Optional: Specify the name of the resource group used for scoping the access policies.
Example 2.4. Microsoft Azure
$ ccoctl azure create-managed-identities \
  --name <azure_infra_name> \ 1
  --output-dir ./output_dir \
  --region <azure_region> \ 2
  --subscription-id <azure_subscription_id> \ 3
  --credentials-requests-dir <path_to_directory_for_credentials_requests> \
  --issuer-url "${OIDC_ISSUER_URL}" \ 4
  --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 5
  --installation-resource-group-name "${AZURE_INSTALL_RG}" 6

1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value.
2 Specify the region of the existing cluster.
3 Specify the subscription ID of the existing cluster.
4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by running the following command:

$ oc get authentication cluster \
  -o jsonpath \
  --template='{ .spec.serviceAccountIssuer }'

5 Specify the name of the resource group that contains the DNS zone.
6 Specify the Azure resource group name. You can obtain this value by running the following command:

$ oc get infrastructure cluster \
  -o jsonpath \
  --template '{ .status.platformStatus.azure.resourceGroupName }'
Example 2.5. Nutanix
$ ccoctl nutanix create-shared-secrets \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1
  --output-dir=<ccoctl_output_dir> \ 2
  --credentials-source-filepath=<path_to_credentials_file> 3

1 Specify the path to the directory that contains the files for the component CredentialsRequest objects.
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials.
For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.

Apply the secrets to your cluster by running the following command:
$ ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}
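After applying the manifests, you can spot-check that each secret now exists. The following sketch (OUT_DIR is a hypothetical stand-in for <path_to_ccoctl_output_dir>) reads the name and namespace out of each generated credentials manifest and prints the corresponding oc get secret command for you to run:

```shell
#!/bin/sh
# Sketch: emit an "oc get secret" check for each generated credentials manifest.
OUT_DIR="${OUT_DIR:-./ccoctl-output}"   # hypothetical output directory

for f in "$OUT_DIR"/manifests/*-credentials.yaml; do
  [ -e "$f" ] || continue                      # skip if the glob matched nothing
  name=$(grep -m1 '^  name:' "$f" | awk '{print $2}')
  ns=$(grep -m1 '^  namespace:' "$f" | awk '{print $2}')
  echo "oc get secret $name -n $ns"
done
```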
Verification
You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts.
Next steps
- Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.
2.2.5. Manually updating cloud provider resources
Before upgrading a cluster with manually maintained credentials, you must create secrets for any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.
Prerequisites
- You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster.
Procedure
Create YAML files with secrets for any CredentialsRequest custom resources that the new release image adds. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

Example 2.6. Sample AWS YAML files

Sample AWS CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"
    ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...

Sample AWS Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>

Example 2.7. Sample Azure YAML files
Note: Global Azure and Azure Stack Hub use the same CredentialsRequest object and secret formats.

Sample Azure CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
    ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...

Sample Azure Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>

Example 2.8. Sample Google Cloud YAML files
Sample Google Cloud CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: GCPProviderSpec
    predefinedRoles:
    - roles/iam.securityReviewer
    - roles/iam.roleViewer
    skipServiceCheck: true
    ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...

Sample Google Cloud Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  service_account.json: <base64_encoded_gcp_service_account_file>

- If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required.
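The <base64_encoded_*> values in the sample Secret objects are plain base64 encodings of the raw credential strings. Producing them by hand is error-prone; here is a small sketch using the standard base64 utility (the variable names are illustrative, not part of the official procedure):

```shell
# Encode raw credential values for the data: section of a Secret manifest.
# printf avoids the trailing newline that echo would add; -w0 (GNU coreutils)
# disables line wrapping so each value is a single base64 string.
AWS_ACCESS_KEY_ID="AKIAEXAMPLE"          # illustrative placeholder value
aws_access_key_id_b64=$(printf '%s' "$AWS_ACCESS_KEY_ID" | base64 -w0)
echo "aws_access_key_id: $aws_access_key_id_b64"
# prints: aws_access_key_id: QUtJQUVYQU1QTEU=
```

Paste the resulting strings into the data fields of the Secret YAML before applying it.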
Next steps
- Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.
2.2.6. Indicating that the cluster is ready to upgrade
The Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials is False by default.
Prerequisites
- For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility (ccoctl).
- You have installed the OpenShift CLI (oc).
Procedure
- Log in to oc on the cluster as a user with the cluster-admin role.
- Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command:

$ oc edit cloudcredential cluster

Text to add:

...
  metadata:
    annotations:
      cloudcredential.openshift.io/upgradeable-to: <version_number>
...

Where <version_number> is the version that you are upgrading to, in the format x.y.z. For example, use 4.12.2 for OpenShift Container Platform 4.12.2.

It may take several minutes after adding the annotation for the upgradeable status to change.
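As a non-interactive alternative to oc edit, you can set the annotation with a merge patch. This is a sketch (oc patch with --type=merge is standard oc behavior, but verify against your oc version); the block below only builds and prints the command so it can be reviewed before running:

```shell
# Build a merge patch that sets the upgradeable-to annotation, then print
# the oc patch command that would apply it to the CloudCredential resource.
VERSION="4.12.2"   # the version you are upgrading to, in x.y.z format
PATCH="{\"metadata\":{\"annotations\":{\"cloudcredential.openshift.io/upgradeable-to\":\"${VERSION}\"}}}"
echo "oc patch cloudcredential cluster --type=merge -p '${PATCH}'"
```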
Verification
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- To view the CCO status details, click cloud-credential in the Cluster Operators list.
- If the Upgradeable status in the Conditions section is False, verify that the upgradeable-to annotation is free of typographical errors.
- When the Upgradeable status in the Conditions section is True, begin the OpenShift Container Platform upgrade.
2.3. Preflight validation for Kernel Module Management (KMM) Modules
Before performing an upgrade on the cluster with applied KMM modules, you must verify that kernel modules installed using KMM are able to be installed on the nodes after the cluster upgrade and possible kernel upgrade. Preflight attempts to validate every Module loaded in the cluster, in parallel. Preflight does not wait for validation of one Module to complete before starting validation of another Module.
2.3.1. Validation kickoff
Preflight validation is triggered by creating a PreflightValidationOCP resource in the cluster. Set the following fields:

releaseImage
- Mandatory field that provides the name of the release image for the OpenShift Container Platform version the cluster is upgraded to.
pushBuiltImage
- If true, then the images created during the Build and Sign validation are pushed to their repositories. This field is false by default.
2.3.2. Validation lifecycle
Preflight validation attempts to validate every module loaded in the cluster. Preflight stops running validation on a Module resource after the validation is successful. If module validation fails, you can change the module definitions, and Preflight tries to validate the module again in the next loop.

If you want to run Preflight validation for an additional kernel, then you should create another PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is recommended to delete the PreflightValidationOCP resource.
2.3.3. Validation status
A PreflightValidationOCP resource reports the status and progress of each module in the cluster that it attempts or has attempted to validate in its .status.modules list. Elements of that list contain the following fields:

lastTransitionTime
- The last time the Module resource status transitioned from one status to another. This should be when the underlying status has changed. If that is not known, then using the time when the API field changed is acceptable.
name
- The name of the Module resource.
namespace
- The namespace of the Module resource.
statusReason
- Verbal explanation regarding the status.
verificationStage
- Describes the validation stage being executed:
  - image: Image existence verification
  - build: Build process verification
  - sign: Sign process verification
verificationStatus
- The status of the Module verification:
  - true: Verified
  - false: Verification failed
  - error: Error during the verification process
  - unknown: Verification has not started
2.3.4. Preflight validation stages per Module
Preflight runs the following validations on every KMM Module present in the cluster:
- Image validation stage
- Build validation stage
- Sign validation stage
2.3.4.1. Image validation stage
Image validation is always the first stage of the preflight validation to be executed. If image validation is successful, no other validations are run on that specific module.
Image validation consists of two stages:
- Image existence and accessibility. The code tries to access the image defined for the upgraded kernel in the module and get its manifests.
- Verify the presence of the kernel module defined in the Module in the correct path for future modprobe execution. If this validation is successful, it probably means that the kernel module was compiled with the correct Linux headers. The correct path is <dirname>/lib/modules/<upgraded_kernel>/.
2.3.4.2. Build validation stage
Build validation is executed only when image validation has failed and there is a build section in the Module that is relevant for the upgraded kernel.

You must specify the kernel version when running depmod, as shown here:
RUN depmod -b /opt ${KERNEL_VERSION}
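For context, this line is a Dockerfile instruction from the module build image. A minimal illustrative fragment (the surrounding lines are assumptions about a typical KMM out-of-tree module build, not taken from this document) shows where the kernel version is consumed:

```
# Illustrative Dockerfile fragment (assumed structure, not from this document):
# the module is installed under /opt, then depmod indexes it for the exact
# upgraded kernel version so modprobe can resolve it at load time.
ARG KERNEL_VERSION
RUN make -C /usr/src/kernels/${KERNEL_VERSION} M=/usr/src/module modules_install INSTALL_MOD_PATH=/opt
RUN depmod -b /opt ${KERNEL_VERSION}
```

Without the explicit ${KERNEL_VERSION} argument, depmod would index modules for the kernel of the build host instead of the upgraded kernel, and the validation would fail.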
If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource, Preflight also tries to push the resulting image into its repository. The resulting image name is taken from the definition of the containerImage field of the Module CR.

Note: If the sign section is defined for the upgraded kernel, then the resulting image will not be the containerImage field of the Module CR, but a temporary image name, because the resulting image should be the product of the Sign flow.
2.3.4.3. Sign validation stage
Sign validation is executed only when image validation has failed, there is a sign section in the Module that is relevant for the upgraded kernel, and the build section, if present for the upgraded kernel, was executed successfully.

If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource, sign validation also tries to push the resulting image to its registry. The resulting image is always the image defined in the ContainerImage field of the Module. The input image is the output of the Build stage. Therefore, if the build stage is not defined, the UnsignedImage field of the Module is used as the input image.
Note: If a build section exists, the sign section input is the output image of the build stage. Therefore, for the input image to be available for the sign stage, the PushBuiltImage flag must be defined in the PreflightValidationOCP custom resource.
2.3.5. Example PreflightValidationOCP resource
This section shows an example of the PreflightValidationOCP resource in YAML format.
The example verifies all of the currently present modules against the upcoming kernel version included in the OpenShift Container Platform release 4.11.18, which the following release image points to:
quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
Because .spec.pushBuiltImage is set to true, KMM pushes the resulting images of the Build and Sign validations into the defined repositories.
apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
name: preflight
spec:
releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
pushBuiltImage: true
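To use the example, save it to a file and create the resource. The following sketch writes the manifest locally and prints the apply command (printed rather than run so it can be reviewed first; the file name is arbitrary):

```shell
# Write the example PreflightValidationOCP manifest and show how to apply it.
cat > preflight.yaml <<'EOF'
apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
  name: preflight
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
  pushBuiltImage: true
EOF
echo "oc apply -f preflight.yaml"
```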