Chapter 10. Changing the cloud provider credentials configuration
For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider.
To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode.
10.1. Rotating cloud provider service keys with the Cloud Credential Operator utility
Some organizations require the rotation of the service keys that authenticate the cluster. You can use the Cloud Credential Operator (CCO) utility (ccoctl) to update keys for clusters installed on the following cloud providers:
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Microsoft Azure
- IBM Cloud
10.1.1. Rotating AWS OIDC bound service account signer keys
If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Amazon Web Services (AWS) is configured to operate in manual mode with STS, you can rotate the bound service account signer key.
To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys.
The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
Prerequisites

- You have access to the OpenShift CLI (oc) as a user with the cluster-admin role.
- You have created an AWS account for the ccoctl utility to use with the following permissions:
  - s3:GetObject
  - s3:PutObject
  - s3:PutObjectTagging
  - For clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the cloudfront:ListDistributions permission.
- You have configured the ccoctl utility.
- Your cluster is in a stable state. You can confirm that the cluster is stable by running the following command:

  $ oc adm wait-for-stable-cluster --minimum-stable-period=5s
Procedure
Configure the following environment variables:

$ INFRA_ID=$(oc get infrastructures cluster -o jsonpath='{.status.infrastructureName}')
$ CLUSTER_NAME=${INFRA_ID%-*}

Note: Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster.
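As an optional check before continuing, you can echo the derived values. The output below is hypothetical; your infrastructure ID and cluster name will differ:

$ echo "${INFRA_ID} ${CLUSTER_NAME}"
mycluster-7vw9j mycluster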
For AWS clusters that store the OIDC configuration in a public S3 bucket, configure the following environment variable:

$ AWS_BUCKET=$(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}' | awk -F'://' '{print $2}' | awk -F'.' '{print $1}')

For AWS clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, complete the following steps:
Extract the public CloudFront distribution URL by running the following command:

$ basename $(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')

Example output

<subdomain>.cloudfront.net

where <subdomain> is an alphanumeric string.

Determine the private S3 bucket name by running the following command:
$ aws cloudfront list-distributions --query "DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(DomainName, '<subdomain>.cloudfront.net')]"

Example output

[
  {
    "DomainName": "<subdomain>.cloudfront.net",
    "OriginDomainName": "<s3_bucket>.s3.us-east-2.amazonaws.com"
  }
]

where <s3_bucket> is the private S3 bucket name for your cluster.

Configure the following environment variable:
$ AWS_BUCKET=<s3_bucket>

where <s3_bucket> is the private S3 bucket name for your cluster.
Create a temporary directory to use and assign it an environment variable by running the following command:

$ TEMPDIR=$(mktemp -d)

To cause the Kubernetes API server to create a new bound service account signing key, you delete the next bound service account signing key.
Important: After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads.
When you are ready, delete the next bound service account signing key by running the following command:

$ oc delete secrets/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator

Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command:

$ oc get secret/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator \
    -o jsonpath='{ .data.service-account\.pub }' | base64 -d > ${TEMPDIR}/serviceaccount-signer.public
Use the public key to create a keys.json file by running the following command:

$ ccoctl aws create-identity-provider \
    --dry-run \
    --output-dir ${TEMPDIR} \
    --name fake \
    --region us-east-1

where:
- The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls.
- Because the --dry-run option does not make any API calls, some parameters, such as --name, do not require real values.
- Specify any valid AWS region, such as us-east-1. This value does not need to match the region the cluster is in.
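The dry run writes the generated files into the output directory with a numeric prefix. As an optional check, you can list the directory to find the generated keys file; the prefix shown below is hypothetical:

$ ls ${TEMPDIR}/*-keys.json
/tmp/tmp.X3b9Qb/03-keys.json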
Rename the keys.json file by running the following command:

$ cp ${TEMPDIR}/<number>-keys.json ${TEMPDIR}/jwks.new.json

where <number> is a two-digit numerical value that varies depending on your environment.

Download the existing keys.json file from the cloud provider by running the following command:

$ aws s3api get-object \
    --bucket ${AWS_BUCKET} \
    --key keys.json ${TEMPDIR}/jwks.current.json
Combine the two keys.json files by running the following command:

$ jq -s '{ keys: map(.keys[])}' ${TEMPDIR}/jwks.current.json ${TEMPDIR}/jwks.new.json > ${TEMPDIR}/jwks.combined.json
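The combined file is a standard JSON Web Key Set (JWKS) that contains the public half of both signing keys. As an optional check, you can confirm that it now holds two keys; the key IDs below are placeholders:

$ jq '.keys | length' ${TEMPDIR}/jwks.combined.json
2
$ jq '{keys: [.keys[] | {kid, kty, use}]}' ${TEMPDIR}/jwks.combined.json
{
  "keys": [
    { "kid": "<current_key_id>", "kty": "RSA", "use": "sig" },
    { "kid": "<new_key_id>", "kty": "RSA", "use": "sig" }
  ]
}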
To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command:

$ aws s3api put-object \
    --bucket ${AWS_BUCKET} \
    --tagging "openshift.io/cloud-credential-operator/${CLUSTER_NAME}=owned" \
    --key keys.json \
    --body ${TEMPDIR}/jwks.combined.json
Wait for the Kubernetes API server to update and use the new key. You can monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

To ensure that all pods on the cluster use the new key, you must restart them.

Important: This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not.
Restart all of the pods in the cluster by running the following command:

$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted

Monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command:

$ aws s3api put-object \
    --bucket ${AWS_BUCKET} \
    --tagging "openshift.io/cloud-credential-operator/${CLUSTER_NAME}=owned" \
    --key keys.json \
    --body ${TEMPDIR}/jwks.new.json
10.1.2. Rotating GCP OIDC bound service account signer keys
If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Google Cloud Platform (GCP) is configured to operate in manual mode with GCP Workload Identity, you can rotate the bound service account signer key.
To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys.
The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
Prerequisites

- You have access to the OpenShift CLI (oc) as a user with the cluster-admin role.
- You have added one of the following authentication options to the GCP account that the ccoctl utility uses:
  - The IAM Workload Identity Pool Admin role
  - The following granular permissions:
    - storage.objects.create
    - storage.objects.delete
- You have configured the ccoctl utility.
- Your cluster is in a stable state. You can confirm that the cluster is stable by running the following command:

  $ oc adm wait-for-stable-cluster --minimum-stable-period=5s
Procedure
Configure the following environment variables:

$ CURRENT_ISSUER=$(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')
$ GCP_BUCKET=$(echo ${CURRENT_ISSUER} | cut -d "/" -f4)

Note: Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster.
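The GCP_BUCKET derivation assumes that the issuer URL has the form https://storage.googleapis.com/<bucket>, so the fourth slash-separated field is the bucket name. You can echo the values to confirm; the output below is hypothetical:

$ echo ${CURRENT_ISSUER}
https://storage.googleapis.com/mycluster-oidc
$ echo ${GCP_BUCKET}
mycluster-oidc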
Create a temporary directory to use and assign it an environment variable by running the following command:

$ TEMPDIR=$(mktemp -d)

To cause the Kubernetes API server to create a new bound service account signing key, you delete the next bound service account signing key.

Important: After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads.

When you are ready, delete the next bound service account signing key by running the following command:

$ oc delete secrets/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator

Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command:

$ oc get secret/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator \
    -o jsonpath='{ .data.service-account\.pub }' | base64 -d > ${TEMPDIR}/serviceaccount-signer.public
Use the public key to create a keys.json file by running the following command:

$ ccoctl gcp create-workload-identity-provider \
    --dry-run \
    --output-dir=${TEMPDIR} \
    --name fake \
    --project fake \
    --workload-identity-pool fake

where:
- The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls.
- Because the --dry-run option does not make any API calls, some parameters, such as --name and --project, do not require real values.
Rename the keys.json file by running the following command:

$ cp ${TEMPDIR}/<number>-keys.json ${TEMPDIR}/jwks.new.json

where <number> is a two-digit numerical value that varies depending on your environment.

Download the existing keys.json file from the cloud provider by running the following command:

$ gcloud storage cp gs://${GCP_BUCKET}/keys.json ${TEMPDIR}/jwks.current.json
Combine the two keys.json files by running the following command:

$ jq -s '{ keys: map(.keys[])}' ${TEMPDIR}/jwks.current.json ${TEMPDIR}/jwks.new.json > ${TEMPDIR}/jwks.combined.json

To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command:

$ gcloud storage cp ${TEMPDIR}/jwks.combined.json gs://${GCP_BUCKET}/keys.json
Wait for the Kubernetes API server to update and use the new key. You can monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

To ensure that all pods on the cluster use the new key, you must restart them.

Important: This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not.

Restart all of the pods in the cluster by running the following command:

$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted

Monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command:

$ gcloud storage cp ${TEMPDIR}/jwks.new.json gs://${GCP_BUCKET}/keys.json
10.1.3. Rotating Azure OIDC bound service account signer keys
If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Microsoft Azure is configured to operate in manual mode with Microsoft Entra Workload ID, you can rotate the bound service account signer key.
To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys.
The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
Prerequisites

- You have access to the OpenShift CLI (oc) as a user with the cluster-admin role.
- You have created a global Azure account for the ccoctl utility to use with the following permissions:
  - Microsoft.Storage/storageAccounts/listkeys/action
  - Microsoft.Storage/storageAccounts/read
  - Microsoft.Storage/storageAccounts/write
  - Microsoft.Storage/storageAccounts/blobServices/containers/read
  - Microsoft.Storage/storageAccounts/blobServices/containers/write
- You have configured the ccoctl utility.
- Your cluster is in a stable state. You can confirm that the cluster is stable by running the following command:

  $ oc adm wait-for-stable-cluster --minimum-stable-period=5s
Procedure
Configure the following environment variables:

$ CURRENT_ISSUER=$(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')
$ AZURE_STORAGE_ACCOUNT=$(echo ${CURRENT_ISSUER} | cut -d "/" -f3 | cut -d "." -f1)
$ AZURE_STORAGE_CONTAINER=$(echo ${CURRENT_ISSUER} | cut -d "/" -f4)

Note: Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster.
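These derivations assume that the issuer URL has the form https://<storage_account>.blob.core.windows.net/<container>. You can echo the values to confirm; the output below is hypothetical:

$ echo ${CURRENT_ISSUER}
https://mycluster.blob.core.windows.net/mycluster-oidc
$ echo "${AZURE_STORAGE_ACCOUNT} ${AZURE_STORAGE_CONTAINER}"
mycluster mycluster-oidc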
Create a temporary directory to use and assign it an environment variable by running the following command:

$ TEMPDIR=$(mktemp -d)

To cause the Kubernetes API server to create a new bound service account signing key, you delete the next bound service account signing key.

Important: After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads.

When you are ready, delete the next bound service account signing key by running the following command:

$ oc delete secrets/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator

Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command:

$ oc get secret/next-bound-service-account-signing-key \
    -n openshift-kube-apiserver-operator \
    -o jsonpath='{ .data.service-account\.pub }' | base64 -d > ${TEMPDIR}/serviceaccount-signer.public
Use the public key to create a keys.json file by running the following command:

$ ccoctl aws create-identity-provider \
    --dry-run \
    --output-dir ${TEMPDIR} \
    --name fake \
    --region us-east-1

where:
- The ccoctl azure command does not include a --dry-run option. To use the --dry-run option, you must specify aws for an Azure cluster.
- The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls.
- Because the --dry-run option does not make any API calls, some parameters, such as --name, do not require real values.
- Specify any valid AWS region, such as us-east-1. This value does not need to match the region the cluster is in.
Rename the keys.json file by running the following command:

$ cp ${TEMPDIR}/<number>-keys.json ${TEMPDIR}/jwks.new.json

where <number> is a two-digit numerical value that varies depending on your environment.

Download the existing keys.json file from the cloud provider by running the following command:

$ az storage blob download \
    --container-name ${AZURE_STORAGE_CONTAINER} \
    --account-name ${AZURE_STORAGE_ACCOUNT} \
    --name 'openid/v1/jwks' \
    -f ${TEMPDIR}/jwks.current.json
Combine the two keys.json files by running the following command:

$ jq -s '{ keys: map(.keys[])}' ${TEMPDIR}/jwks.current.json ${TEMPDIR}/jwks.new.json > ${TEMPDIR}/jwks.combined.json

To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command:

$ az storage blob upload \
    --overwrite \
    --account-name ${AZURE_STORAGE_ACCOUNT} \
    --container-name ${AZURE_STORAGE_CONTAINER} \
    --name 'openid/v1/jwks' \
    -f ${TEMPDIR}/jwks.combined.json
Wait for the Kubernetes API server to update and use the new key. You can monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

To ensure that all pods on the cluster use the new key, you must restart them.

Important: This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not.

Restart all of the pods in the cluster by running the following command:

$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted

Monitor the update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command:

$ az storage blob upload \
    --overwrite \
    --account-name ${AZURE_STORAGE_ACCOUNT} \
    --container-name ${AZURE_STORAGE_CONTAINER} \
    --name 'openid/v1/jwks' \
    -f ${TEMPDIR}/jwks.new.json
10.1.4. Rotating IBM Cloud credentials
You can rotate API keys for your existing service IDs and update the corresponding secrets.
Prerequisites
- You have configured the ccoctl utility.
- You have existing service IDs in a live, installed OpenShift Container Platform cluster.
Procedure
Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets by running the following command:

$ ccoctl <provider_name> refresh-keys \
    --kubeconfig <openshift_kubeconfig_file> \
    --credentials-requests-dir <path_to_credential_requests_directory> \
    --name <name>

where:
- <provider_name> is the name of your provider, such as ibmcloud.
- <openshift_kubeconfig_file> is the kubeconfig file for the cluster.
- <path_to_credential_requests_directory> is the directory that contains the CredentialsRequest object files.
- <name> is the name of the OpenShift Container Platform cluster.

Note: If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
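For example, a hypothetical invocation for an IBM Cloud cluster named mycluster, with the kubeconfig and credentials requests directory paths shown as illustrative values, might look like the following:

$ ccoctl ibmcloud refresh-keys \
    --kubeconfig ~/.kube/config \
    --credentials-requests-dir ./credrequests \
    --name mycluster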
10.2. Rotating cloud provider credentials
Some organizations require the rotation of the cloud provider credentials. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
10.2.1. Rotating cloud provider credentials manually
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
Prerequisites
- Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
  - For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
  - For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported.
- You have changed the credentials that are used to interface with your cloud provider.
- The new credentials have sufficient permissions for the mode that the CCO is configured to use in your cluster.
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets.

In the table on the Secrets page, find the root secret for your cloud provider.

Platform         Secret name
AWS              aws-creds
Azure            azure-credentials
GCP              gcp-credentials
RHOSP            openstack-credentials
VMware vSphere   vsphere-creds
Click the Options menu in the same row as the secret and select Edit Secret.

Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.

Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.

Note: If the vSphere CSI Driver Operator is enabled, this step is not required.

To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command:

$ oc patch kubecontrollermanager cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
    --type=merge

While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true. To view the status, run the following command:

$ oc get co kube-controller-manager
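While the rollout is in progress, the output resembles the following; the version and timing values are illustrative, and the columns can vary by OpenShift Container Platform version:

NAME                      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
kube-controller-manager   4.19.0    True        True          False      2m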
If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.

Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.

Get the names and namespaces of all referenced component secrets:

$ oc -n openshift-cloud-credential-operator get CredentialsRequest \
    -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

where <provider_spec> is the corresponding value for your cloud provider:
- AWS: AWSProviderSpec
- GCP: GCPProviderSpec

Partial example output for AWS

{
  "name": "ebs-cloud-credentials",
  "namespace": "openshift-cluster-csi-drivers"
}
{
  "name": "cloud-credential-operator-iam-ro-creds",
  "namespace": "openshift-cloud-credential-operator"
}
Delete each of the referenced component secrets:

$ oc delete secret <secret_name> \
    -n <secret_namespace>

where <secret_name> is the name of a referenced secret and <secret_namespace> is the namespace that contains it.

Example deletion of an AWS secret

$ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets causes the CCO to delete the existing credentials from the platform and create new ones.
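As an optional check, you can confirm that the CCO has recreated a deleted secret by querying for it again; a small AGE value indicates a freshly minted secret. The output below is illustrative:

$ oc get secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
NAME                    TYPE     DATA   AGE
ebs-cloud-credentials   Opaque   2      95s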
Verification

To verify that the credentials have changed:
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
- Verify that the contents of the Value field or fields have changed.
10.3. Removing cloud provider credentials
After installing OpenShift Container Platform, some organizations require the removal of the cloud provider credentials that were used during the initial installation. For clusters that use the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret after installation.
10.3.1. Removing cloud provider credentials
For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions.
After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates.
Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.18 to 4.19), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked.
Prerequisites
- Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets.

In the table on the Secrets page, find the root secret for your cloud provider.

Platform   Secret name
AWS        aws-creds
GCP        gcp-credentials

Click the Options menu in the same row as the secret and select Delete Secret.
10.4. Enabling token-based authentication
After installing a Microsoft Azure OpenShift Container Platform cluster, you can enable Microsoft Entra Workload ID to use short-term credentials.
10.4.1. Configuring the Cloud Credential Operator utility
To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility (ccoctl) binary.

The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:

$ RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})

Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)

Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

$ oc image extract $CCO_IMAGE \
    --file="/usr/bin/ccoctl.<rhel_version>" \
    -a ~/.pull-secret

For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
- rhel8: Specify this value for hosts that use RHEL 8.
- rhel9: Specify this value for hosts that use RHEL 9.
Change the permissions to make ccoctl executable by running the following command:

$ chmod 775 ccoctl.<rhel_version>
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

$ ./ccoctl.rhel9

Example output

OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
10.4.2. Enabling Microsoft Entra Workload ID on an existing cluster
If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster.
The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
- After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails.
Prerequisites
- You have installed an OpenShift Container Platform cluster on Microsoft Azure.
- You have access to the cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have extracted and prepared the Cloud Credential Operator utility (ccoctl) binary.
- You have access to your Azure account by using the Azure CLI (az).
Procedure
Create an output directory for the manifests that the ccoctl utility generates. This procedure uses ./output_dir as an example.

Extract the service account public signing key for the cluster to the output directory by running the following command:

$ oc get configmap \
    --namespace openshift-kube-apiserver bound-sa-token-signing-certs \
    --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public

This procedure uses a file named serviceaccount-signer.public as an example.
Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command:

$ ./ccoctl azure create-oidc-issuer \
    --name <azure_infra_name> \
    --output-dir ./output_dir \
    --region <azure_region> \
    --subscription-id <azure_subscription_id> \
    --tenant-id <azure_tenant_id> \
    --public-key-file ./output_dir/serviceaccount-signer.public

where:
- The value of the --name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value.
- <azure_region> is the region of the existing cluster.
- <azure_subscription_id> is the subscription ID of the existing cluster.
- --public-key-file specifies the file that contains the service account public signing key for the cluster.
Verify that the configuration file for the Azure pod identity webhook was created by running the following command:

$ ll ./output_dir/manifests

Example output

total 8
-rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml
-rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml

The file azure-ad-pod-identity-webhook-config.yaml contains the Azure pod identity webhook configuration.
Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command:

$ OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print $2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`
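As an optional check, echo the variable to confirm that it contains the issuer URL that ccoctl created; the value below is hypothetical:

$ echo ${OIDC_ISSUER_URL}
https://myclusteroidc.blob.core.windows.net/myclusteroidc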
Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command:

$ oc patch authentication cluster \
    --type=merge \
    -p "{\"spec\":{\"serviceAccountIssuer\":\"${OIDC_ISSUER_URL}\"}}"

Monitor the configuration update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Restart all of the pods in the cluster by running the following command:

$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted
Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command:

$ oc patch cloudcredential cluster \
    --type=merge \
    --patch '{"spec":{"credentialsMode":"Manual"}}'

Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

$ oc adm release extract \
    --credentials-requests \
    --included \
    --to <path_to_directory_for_credentials_requests> \
    --registry-config ~/.pull-secret

Note: This command might take a few moments to run.
Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command:

$ AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`

Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command:

Note: The following command does not show all available options. For a complete list of options, including those that might be necessary for your specific use case, run ccoctl azure create-managed-identities --help.

$ ccoctl azure create-managed-identities \
    --name <azure_infra_name> \
    --output-dir ./output_dir \
    --region <azure_region> \
    --subscription-id <azure_subscription_id> \
    --credentials-requests-dir <path_to_directory_for_credentials_requests> \
    --issuer-url "${OIDC_ISSUER_URL}" \
    --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \
    --installation-resource-group-name "${AZURE_INSTALL_RG}" \
    --network-resource-group-name <azure_resource_group>

where:
- <azure_dns_zone_resourcegroup_name> is the name of the resource group that contains the DNS zone for the cluster.
- <azure_resource_group> is the name of the network resource group for the cluster.

Apply the Azure pod identity webhook configuration for Workload ID by running the following command:

$ oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml
Apply the secrets generated by the ccoctl utility by running the following command:

$ find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {}

This process might take several minutes.
Restart all of the pods in the cluster by running the following command:

$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted

Monitor the configuration update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Optional: Remove the Azure root credentials secret by running the following command:

$ oc delete secret -n kube-system azure-credentials
10.4.3. Verifying that a cluster uses short-term credentials
You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster.
Prerequisites
- You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility (ccoctl) to implement short-term credentials.
- You installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
Procedure
Verify that the CCO is configured to operate in manual mode by running the following command:

$ oc get cloudcredentials cluster \
    -o=jsonpath={.spec.credentialsMode}

The following output confirms that the CCO is operating in manual mode:

Example output

Manual

Verify that the cluster does not have root credentials by running the following command:

$ oc get secrets \
    -n kube-system <secret_name>

where <secret_name> is the name of the root secret for your cloud provider.

Platform                      Secret name
Amazon Web Services (AWS)     aws-creds
Microsoft Azure               azure-credentials
Google Cloud Platform (GCP)   gcp-credentials

An error confirms that the root secret is not present on the cluster.

Example output for an AWS cluster

Error from server (NotFound): secrets "aws-creds" not found
Verify that the components are using short-term security credentials for individual components by running the following command:

$ oc get authentication cluster \
    -o jsonpath \
    --template='{ .spec.serviceAccountIssuer }'

This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster.

Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command:
$ oc get secrets \
    -n openshift-image-registry installer-cloud-credentials \
    -o jsonpath='{.data}'

An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID.
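For example, the output resembles the following; the base64-encoded values are elided here and the exact set of keys can vary:

{"azure_client_id":"<base64_value>","azure_federated_token_file":"<base64_value>",...}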
Azure clusters: Verify that the pod identity webhook is running by running the following command:

$ oc get pods \
    -n openshift-cloud-credential-operator
Example output

NAME                                         READY   STATUS    RESTARTS   AGE
cloud-credential-operator-59cf744f78-r8pbq   2/2     Running   2          71m
pod-identity-webhook-548f977b4c-859lz        1/1     Running   1          70m